cs.RO(2026-01-22)

📊 10 papers in total | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 1: Robot Control (7, 🔗 1) · Pillar 2: RL & Architecture (1) · Pillar 9: Embodied Foundation Models (1) · Pillar 3: Perception & Semantics (1, 🔗 1)

🔬 Pillar 1: Robot Control (7 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 1 | PUMA: Perception-driven Unified Foothold Prior for Mobility Augmented Quadruped Parkour | PUMA: a perception-driven unified foothold prior that augments quadruped parkour mobility | quadruped, legged robot, locomotion | |
| 2 | Efficiently Learning Robust Torque-based Locomotion Through Reinforcement with Model-Based Supervision | Proposes a model-supervised reinforcement learning method for efficiently learning robust torque-based bipedal locomotion | bipedal, biped, whole-body control | |
| 3 | Point Bridge: 3D Representations for Cross Domain Policy Learning | Point Bridge: cross-domain policy learning via point-cloud representations, addressing sim-to-real transfer | manipulation, sim-to-real, policy learning | |
| 4 | IVRA: Improving Visual-Token Relations for Robot Action Policy with Training-Free Hint-Based Guidance | IVRA: training-free hint-based guidance that improves visual-token relations for better robot action policies | manipulation, vision-language-action, VLA | |
| 5 | Collision-Free Humanoid Traversal in Cluttered Indoor Scenes | Proposes HumanoidPF for collision-free humanoid navigation in cluttered indoor scenes | humanoid, sim-to-real, teleoperation | |
| 6 | AION: Aerial Indoor Object-Goal Navigation Using Dual-Policy Reinforcement Learning | AION: dual-policy reinforcement learning for aerial indoor object-goal navigation | locomotion, reinforcement learning | |
| 7 | DextER: Language-driven Dexterous Grasp Generation with Embodied Reasoning | DextER: dexterous grasp generation with embodied reasoning, improving task-level semantic understanding and physical interaction | manipulation | |

🔬 Pillar 2: RL & Architecture (1 paper)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 8 | D-Optimality-Guided Reinforcement Learning for Efficient Open-Loop Calibration of a 3-DOF Ankle Rehabilitation Robot | Proposes a D-optimality-guided reinforcement learning method for efficient open-loop calibration of a 3-DOF ankle rehabilitation robot | reinforcement learning, PPO | |

🔬 Pillar 9: Embodied Foundation Models (1 paper)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 9 | TeNet: Text-to-Network for Compact Policy Synthesis | TeNet: a text-to-network method for compact policy synthesis, targeting resource-constrained robot control | large language model | |

🔬 Pillar 3: Perception & Semantics (1 paper)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 10 | Accurate Calibration and Robust LiDAR-Inertial Odometry for Spinning Actuated LiDAR Systems | Proposes accurate calibration and robust LiDAR-inertial odometry for spinning actuated LiDAR systems | LIO | |
