cs.RO (2024-08-06)
📊 8 papers total | 🔗 1 with code
🎯 Interest Area Navigation
Pillar 1: Robot Control (4, 🔗 1)
Pillar 2: RL Algorithms & Architecture (3)
Pillar 9: Embodied Foundation Models (1)
🔬 Pillar 1: Robot Control (4 papers)
| # | Title | One-line Summary | Tags | 🔗 | ⭐ |
|---|---|---|---|---|---|
| 1 | Faster Model Predictive Control via Self-Supervised Initialization Learning | Proposes self-supervised initialization learning to warm-start model predictive control, improving solve speed and control accuracy. | MPC, model predictive control, reinforcement learning | | |
| 2 | KOI: Accelerating Online Imitation Learning via Hybrid Key-state Guidance | KOI accelerates online imitation learning via hybrid key-state guidance. | manipulation, imitation learning, optical flow | ✅ | |
| 3 | Stochastic Trajectory Optimization for Robotic Skill Acquisition From a Suboptimal Demonstration | Proposes the MSTOMP algorithm to learn and refine robot skills from a suboptimal demonstration trajectory. | trajectory optimization, motion planning | | |
| 4 | Integrating Controllable Motion Skills from Demonstrations | Proposes a controllable skill-integration framework for combining diverse motion skills. | legged robot, reinforcement learning | | |
🔬 Pillar 2: RL Algorithms & Architecture (3 papers)
| # | Title | One-line Summary | Tags | 🔗 | ⭐ |
|---|---|---|---|---|---|
| 5 | Integrated Intention Prediction and Decision-Making with Spectrum Attention Net and Proximal Policy Optimization | Proposes an intention-prediction and decision-making framework that combines a spectrum attention network with PPO for autonomous driving. | reinforcement learning, deep reinforcement learning, DRL | | |
| 6 | Adversarial Safety-Critical Scenario Generation using Naturalistic Human Driving Priors | Proposes adversarial safety-critical scenario generation based on naturalistic human driving priors for evaluating autonomous-driving decision systems. | reinforcement learning, PPO, imitation learning | | |
| 7 | Learning to Turn: Diffusion Imitation for Robust Row Turning in Under-Canopy Robots | Proposes a diffusion-imitation approach to row turning that improves autonomous navigation for under-canopy agricultural robots. | imitation learning, diffusion policy | | |
🔬 Pillar 9: Embodied Foundation Models (1 paper)
| # | Title | One-line Summary | Tags | 🔗 | ⭐ |
|---|---|---|---|---|---|
| 8 | Few-shot Scooping Under Domain Shift via Simulated Maximal Deployment Gaps | Proposes kCMD, which simulates maximal deployment gaps to tackle few-shot scooping under domain shift. | zero-shot transfer | | |