cs.RO (2025-02-19)

📊 9 papers in total | 🔗 1 with code

🎯 Interest Areas

Pillar 1: Robot Control (3) · Pillar 2: RL Algorithms & Architecture (3) · Pillar 3: Spatial Perception & Semantics (2) · Pillar 8: Physics-based Animation (1 🔗1)

🔬 Pillar 1: Robot Control (3 papers)

| # | Title | One-sentence takeaway | Tags | 🔗 |
|---|-------|----------------------|------|----|
| 1 | VLAS: Vision-Language-Action Model With Speech Instructions For Customized Robot Manipulation | Proposes VLAS, which enables customized robot manipulation through speech instructions, removing conventional methods' reliance on text-only instructions. | manipulation, vision-language-action, VLA | |
| 2 | Precise Mobile Manipulation of Small Everyday Objects | Proposes SVM, a visual-model-based servoing framework for precise manipulation of small objects by mobile robots. | manipulation, mobile manipulation, imitation learning | |
| 3 | MILE: Model-based Intervention Learning | Proposes MILE, a model-based intervention learning method that learns control policies from only a small number of expert interventions. | manipulation, imitation learning | |

🔬 Pillar 2: RL Algorithms & Architecture (3 papers)

| # | Title | One-sentence takeaway | Tags | 🔗 |
|---|-------|----------------------|------|----|
| 4 | Generative Predictive Control: Flow Matching Policies for Dynamic and Difficult-to-Demonstrate Tasks | Proposes Generative Predictive Control, addressing robot control for dynamic and hard-to-demonstrate tasks. | behavior cloning, flow matching | |
| 5 | Improving Collision-Free Success Rate For Object Goal Visual Navigation Via Two-Stage Training With Collision Prediction | Proposes a two-stage training method with collision prediction that improves the collision-free success rate of object-goal visual navigation. | reinforcement learning, deep reinforcement learning, egocentric | |
| 6 | NavigateDiff: Visual Predictors are Zero-Shot Navigation Assistants | Proposes NavigateDiff, which uses visual predictors to achieve zero-shot robot navigation. | reinforcement learning, foundation model | |

🔬 Pillar 3: Spatial Perception & Semantics (2 papers)

| # | Title | One-sentence takeaway | Tags | 🔗 |
|---|-------|----------------------|------|----|
| 7 | MapNav: A Novel Memory Representation via Annotated Semantic Maps for Vision-and-Language Navigation | Proposes MapNav, which uses annotated semantic maps as a memory representation for vision-and-language navigation. | semantic map, embodied AI, VLN | |
| 8 | Active Illumination for Visual Ego-Motion Estimation in the Dark | Proposes an active-illumination framework that improves the pose-estimation accuracy of visual odometry in dark environments. | visual odometry, visual SLAM | |

🔬 Pillar 8: Physics-based Animation (1 paper)

| # | Title | One-sentence takeaway | Tags | 🔗 |
|---|-------|----------------------|------|----|
| 9 | Ephemerality meets LiDAR-based Lifelong Mapping | Proposes ELite, an ephemerality-aware LiDAR-based lifelong mapping framework for long-term robot deployment in dynamic environments. | spatiotemporal | 🔗 |
