HannesImitation: Grasping with the Hannes Prosthetic Hand via Imitation Learning
Authors: Carlo Alessi, Federico Vasile, Federico Ceola, Giulia Pasquale, Nicolò Boccardo, Lorenzo Natale
Categories: cs.RO, cs.AI
Published: 2025-08-01
Comments: Paper accepted at IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Venue: IEEE/RSJ International Conference on Intelligent Robots and Systems, Hangzhou, China, 2025
🔗 Code/Project: https://hsp-iit.github.io/HannesImitation
💡 One-line takeaway
HannesImitation is proposed to tackle grasp control for prosthetic hands via imitation learning.
🎯 Matched areas: Pillar 1: Robot Control · Pillar 2: RL & Architecture
Keywords: prosthetic control, imitation learning, robotic grasping, visual servoing, data-driven
📋 Key points
- Existing prosthetic-hand control methods fall short in complex environments and cannot handle diverse grasping tasks effectively.
- This paper proposes HannesImitationPolicy, which controls the prosthetic hand for grasping via imitation learning, simplifying data collection.
- Experiments show that HannesImitationPolicy grasps successfully across diverse objects and conditions, outperforming a traditional visual-servoing control method.
📝 Abstract (translated)
Recent advances in prosthetic-hand control have focused on increasing autonomy through cameras and other sensors to reduce the user's cognitive load. However, imitation learning remains largely unexplored for prosthetic-hand control. To this end, this paper presents HannesImitationPolicy, an imitation-learning-based method for controlling the Hannes prosthetic hand to grasp objects in unstructured environments. The authors also build the HannesImitationDataset, which contains grasping demonstrations across multiple scenarios. A single diffusion policy trained on these data is deployed on the prosthetic hand to predict wrist orientation and hand closure; experiments show successful grasps across diverse objects and conditions, and the policy outperforms a segmentation-based visual-servoing controller in unstructured scenarios.
🔬 Method details
Problem definition: The paper targets grasp control for prosthetic hands in unstructured environments; existing methods often rely on manually annotated data, which limits their flexibility and adaptability.
Core idea: Use imitation learning to train a control policy from grasping demonstrations, so the prosthetic hand can learn grasping tasks across diverse environments while reducing the user's cognitive load.
Technical framework: The overall architecture comprises three main modules: data collection, policy training, and control execution. First, demonstrations are collected across multiple grasping scenarios; next, a diffusion policy is trained on these data; finally, the trained policy is deployed on the prosthetic hand for real-time control.
Key innovation: The main contribution is applying imitation learning to prosthetic-hand control, filling a gap in this research area and improving grasp flexibility and accuracy through a data-driven approach.
Key design: A single diffusion policy is trained to jointly predict wrist orientation and hand closure; the training objective accounts for grasp success and stability to ensure precise control.
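To make the deployment step above concrete, here is a minimal, hypothetical sketch of diffusion-policy action sampling: starting from Gaussian noise, an action vector [wrist orientation, hand closure] is iteratively denoised, conditioned on an observation embedding. The noise-prediction stub, step count, and update rule are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np

N_STEPS = 10      # number of denoising steps (assumed, not from the paper)
ACTION_DIM = 2    # [wrist_orientation, hand_closure]

def predict_noise(action, obs_embedding, t):
    """Stub for the learned noise-prediction network eps_theta(a_t, obs, t).

    A real policy would run a trained neural network here; this stub pulls
    the action toward a fixed target so the loop is runnable and converges.
    """
    target = obs_embedding[:ACTION_DIM]
    return action - target

def sample_action(obs_embedding, rng):
    """Deterministic denoising from pure noise to a final action vector."""
    action = rng.standard_normal(ACTION_DIM)      # a_T ~ N(0, I)
    for t in reversed(range(N_STEPS)):
        eps = predict_noise(action, obs_embedding, t)
        action = action - eps / (t + 1)           # simplified update rule
    # Clip to actuation limits (assumed normalized to [-1, 1]).
    return np.clip(action, -1.0, 1.0)

rng = np.random.default_rng(0)
obs = np.array([0.3, -0.5, 0.1])   # toy observation embedding
action = sample_action(obs, rng)
print(action)                      # [wrist_orientation, hand_closure]
```

In the actual system, the observation embedding would come from the wrist-mounted camera stream and the predicted action would drive the Hannes hand's wrist and closure actuators in real time.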
📊 Experimental highlights
Experiments show that HannesImitationPolicy grasps successfully across diverse objects and conditions, with a success rate notably higher than the traditional segmentation-based visual-servoing controller (the exact margin is not reported in this summary), demonstrating the method's effectiveness in unstructured environments.
🎯 Applications
Potential application areas include prosthetics, robotic grasping, and human-robot interaction. Improving a prosthetic hand's grasping ability can substantially improve users' quality of life, helping them manipulate objects more naturally in daily activities. The method's flexibility and adaptability also suggest new directions for future intelligent-robot applications.
📄 Abstract (original)
Recent advancements in control of prosthetic hands have focused on increasing autonomy through the use of cameras and other sensory inputs. These systems aim to reduce the cognitive load on the user by automatically controlling certain degrees of freedom. In robotics, imitation learning has emerged as a promising approach for learning grasping and complex manipulation tasks while simplifying data collection. Its application to the control of prosthetic hands remains, however, largely unexplored. Bridging this gap could enhance dexterity restoration and enable prosthetic devices to operate in more unconstrained scenarios, where tasks are learned from demonstrations rather than relying on manually annotated sequences. To this end, we present HannesImitationPolicy, an imitation learning-based method to control the Hannes prosthetic hand, enabling object grasping in unstructured environments. Moreover, we introduce the HannesImitationDataset comprising grasping demonstrations in table, shelf, and human-to-prosthesis handover scenarios. We leverage such data to train a single diffusion policy and deploy it on the prosthetic hand to predict the wrist orientation and hand closure for grasping. Experimental evaluation demonstrates successful grasps across diverse objects and conditions. Finally, we show that the policy outperforms a segmentation-based visual servo controller in unstructured scenarios. Additional material is provided on our project page: https://hsp-iit.github.io/HannesImitation