EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding
Authors: Yuqi Wu, Wenzhao Zheng, Sicheng Zuo, Yuanhui Huang, Jie Zhou, Jiwen Lu
Categories: cs.CV, cs.AI, cs.LG
Published: 2024-12-05 (Updated: 2025-08-25)
Comments: Accepted by ICCV 2025. Code: https://github.com/YkiWu/EmbodiedOcc
🔗 Code/Project: GITHUB
💡 One-sentence takeaway
EmbodiedOcc: a Gaussian-based framework for embodied 3D occupancy prediction, targeting vision-based online scene understanding
🎯 Matched area: Pillar 3: Spatial Perception & Semantics (Perception & Semantics)
Keywords: embodied AI, 3D occupancy prediction, scene understanding, Gaussian representation, deformable cross-attention
📋 Key points
- Existing 3D occupancy prediction methods focus mainly on offline perception and are hard to apply to embodied agents that must explore a scene progressively.
- The EmbodiedOcc framework represents the global scene with 3D Gaussians and progressively refines the Gaussians of locally observed regions via deformable cross-attention.
- EmbodiedOcc outperforms existing methods by a large margin on the EmbodiedOcc-ScanNet benchmark, achieving embodied occupancy prediction with high accuracy and efficiency.
🔬 Method details
Problem definition: The paper targets online 3D occupancy prediction for an embodied agent operating in an unknown environment. Existing methods rely on offline data or a limited set of views and cannot support an agent that explores and understands its surroundings progressively; they are also typically too computationally heavy for real-time use.
Core idea: The scene is represented with 3D semantic Gaussians that are progressively refined as the agent explores. This lets the agent maintain an explicit, probabilistic global memory of the scene and efficiently fuse new observations; because only the locally observed region is updated at each step, the cost of repeated global reconstruction is avoided.
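To make the Gaussian-memory idea concrete, below is a minimal sketch of what such an explicit global memory could look like, assuming a simple uniform-grid initialization; the class name, tensor layout, and field names are illustrative assumptions, not the authors' implementation.

```python
import torch

class SemanticGaussianMemory:
    """Illustrative global scene memory built from 3D semantic Gaussians.

    Each Gaussian carries a mean, scale, rotation (quaternion), opacity and a
    vector of semantic logits. Field names and layout are assumptions for
    illustration, not the official EmbodiedOcc code.
    """

    def __init__(self, scene_min, scene_max, resolution=0.5, num_classes=12):
        # Uniform-grid initialization, matching the "unknown, uniformly
        # distributed environment" assumption before any observation arrives.
        xs = torch.arange(scene_min[0], scene_max[0], resolution)
        ys = torch.arange(scene_min[1], scene_max[1], resolution)
        zs = torch.arange(scene_min[2], scene_max[2], resolution)
        grid = torch.stack(torch.meshgrid(xs, ys, zs, indexing="ij"), dim=-1)
        self.means = grid.reshape(-1, 3)                    # (N, 3) centers
        n = self.means.shape[0]
        self.scales = torch.full((n, 3), resolution / 2)    # (N, 3) extents
        self.rotations = torch.zeros(n, 4)                  # (N, 4) quaternions
        self.rotations[:, 0] = 1.0                          # identity rotation
        self.opacities = torch.full((n, 1), 0.1)            # (N, 1) low prior
        self.semantics = torch.zeros(n, num_classes)        # (N, C) uniform logits

    def local_mask(self, cam_center, radius):
        """Boolean mask of Gaussians inside the region the agent currently observes."""
        return (self.means - cam_center).norm(dim=-1) < radius
```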
Technical framework: EmbodiedOcc consists of four stages: 1) Initialization: the global scene is initialized with uniformly distributed 3D Gaussians. 2) Feature extraction: semantic and structural features are extracted from the image observed by the agent. 3) Gaussian update: deformable cross-attention injects the extracted features into the Gaussians of the locally observed region and updates their parameters. 4) Occupancy prediction: Gaussian-to-voxel splatting converts the updated 3D Gaussians into global 3D occupancy.
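Read as an online loop, the four stages could be wired together roughly as follows; `backbone`, `refiner`, and `splatter` are hypothetical placeholder callables (feature extraction, deformable cross-attention refinement, Gaussian-to-voxel splatting), not the repository's API.

```python
def embodied_occupancy_loop(memory, frame_stream, backbone, refiner, splatter):
    """Online loop sketch over the four stages described above.

    `memory` follows the SemanticGaussianMemory sketch; `backbone`, `refiner`
    and `splatter` are placeholder callables for feature extraction, deformable
    cross-attention refinement, and Gaussian-to-voxel splatting respectively.
    """
    for image, intrinsics, extrinsics in frame_stream:
        # 2) Semantic and structural features of the current observation.
        feats = backbone(image)

        # 3) Refine only the Gaussians inside the region the agent currently sees.
        cam_center = extrinsics[:3, 3]
        mask = memory.local_mask(cam_center, radius=3.0)
        memory.means[mask], memory.semantics[mask] = refiner(
            memory.means[mask], memory.semantics[mask], feats, intrinsics, extrinsics
        )

    # 4) Splat the maintained global Gaussians into a semantic occupancy grid.
    return splatter(memory)
```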
Key innovation: The method uses 3D Gaussians as an explicit global memory of the scene and a deformable cross-attention mechanism to fuse features into local regions. Compared with dense voxel or point-cloud representations, Gaussians offer stronger expressiveness at lower computational cost, and deformable cross-attention attends to the relevant image regions and injects their information into the Gaussians.
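For the Gaussian-to-voxel splatting step that closes the pipeline, here is a simplified, isotropic sketch of how per-voxel occupancy and semantic labels could be accumulated from the Gaussians; real implementations typically use anisotropic covariances and CUDA kernels, and the function and argument names are hypothetical.

```python
import torch

def gaussian_to_voxel_splatting(means, scales, opacities, semantics,
                                voxel_centers, cutoff=3.0):
    """Isotropic sketch of Gaussian-to-voxel splatting (illustrative only).

    Each voxel accumulates the semantic logits of nearby Gaussians, weighted by
    opacity times a Gaussian density term evaluated at the voxel center.
    """
    # (V, N) squared distances between voxel centers and Gaussian means.
    d2 = torch.cdist(voxel_centers, means).pow(2)
    sigma2 = scales.pow(2).mean(dim=-1)                   # (N,) isotropic variance
    inside = d2 < (cutoff ** 2) * sigma2                  # keep Gaussians within `cutoff` sigmas
    density = torch.exp(-0.5 * d2 / sigma2) * opacities.squeeze(-1)  # (V, N)
    density = density * inside
    sem = density @ semantics                             # (V, C) accumulated logits
    occ = density.sum(dim=-1, keepdim=True)               # (V, 1) occupancy score
    return occ, sem.argmax(dim=-1)                        # score + per-voxel label
```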
Key design: EmbodiedOcc fuses image features with 3D Gaussian features via deformable cross-attention. Concretely, for each 3D Gaussian the network predicts a set of offsets used to sample the image feature map, so the sampling adapts to the scene geometry. The loss combines an occupancy prediction term and a semantic segmentation term to supervise both the 3D occupancy and its semantics.
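A hedged sketch of this deformable sampling idea is given below: each Gaussian is projected into the image and learned offsets select the feature locations to aggregate. The module illustrates the mechanism with `torch.nn.functional.grid_sample` and is not the exact EmbodiedOcc attention module; the class and argument names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableImageSampler(nn.Module):
    """Illustrative deformable feature sampling for 3D Gaussian queries."""

    def __init__(self, feat_dim, query_dim, num_points=4):
        super().__init__()
        self.num_points = num_points
        self.offset_head = nn.Linear(query_dim, num_points * 2)  # per-Gaussian 2D offsets
        self.weight_head = nn.Linear(query_dim, num_points)      # per-point attention weights

    def forward(self, gauss_queries, gauss_xyz, feat_map, intrinsics):
        # gauss_xyz is assumed to already be in the camera frame for brevity.
        cam = gauss_xyz @ intrinsics.T                       # (N, 3) pinhole projection
        uv = cam[:, :2] / cam[:, 2:].clamp(min=1e-5)
        h, w = feat_map.shape[-2:]
        ref = torch.stack([uv[:, 0] / w, uv[:, 1] / h], dim=-1) * 2 - 1  # (N, 2) in [-1, 1]

        offsets = self.offset_head(gauss_queries).view(-1, self.num_points, 2) * 0.1
        weights = self.weight_head(gauss_queries).softmax(dim=-1)        # (N, P)

        # Sample the feature map at the offset locations around each reference point.
        grid = (ref[:, None] + offsets).view(1, -1, 1, 2)                # (1, N*P, 1, 2)
        sampled = F.grid_sample(feat_map[None], grid, align_corners=False)
        sampled = sampled.view(feat_map.shape[0], -1, self.num_points)   # (C, N, P)

        # Weighted aggregation over the sampled points -> one feature per Gaussian.
        return torch.einsum("cnp,np->nc", sampled, weights)              # (N, C)
```

The combined loss mentioned above would then supervise the voxel outputs obtained from the refined Gaussians, e.g. an occupancy term plus a per-voxel cross-entropy over semantic classes.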
📊 Experimental highlights
EmbodiedOcc achieves significant gains on the EmbodiedOcc-ScanNet benchmark, outperforming existing methods in both the accuracy and the efficiency of 3D occupancy prediction. In particular, it improves IoU over the previous best method by more than 10 percentage points while also running faster, which makes it suitable for real-time use. The code is open-sourced.
🎯 Application scenarios
EmbodiedOcc can be applied to robot navigation, autonomous driving, and augmented reality. In robot navigation, it helps a robot understand its surroundings and plan safe paths. In autonomous driving, it can be used to build high-precision 3D maps and improve the vehicle's perception. In augmented reality, it enables accurate alignment and interaction between virtual objects and the real scene. The work offers a new perspective on scene understanding for embodied agents and has substantial practical value.
📄 Abstract (original)
3D occupancy prediction provides a comprehensive description of the surrounding scenes and has become an essential task for 3D perception. Most existing methods focus on offline perception from one or a few views and cannot be applied to embodied agents that demand to gradually perceive the scene through progressive embodied exploration. In this paper, we formulate an embodied 3D occupancy prediction task to target this practical scenario and propose a Gaussian-based EmbodiedOcc framework to accomplish it. We initialize the global scene with uniform 3D semantic Gaussians and progressively update local regions observed by the embodied agent. For each update, we extract semantic and structural features from the observed image and efficiently incorporate them via deformable cross-attention to refine the regional Gaussians. Finally, we employ Gaussian-to-voxel splatting to obtain the global 3D occupancy from the updated 3D Gaussians. Our EmbodiedOcc assumes an unknown (i.e., uniformly distributed) environment and maintains an explicit global memory of it with 3D Gaussians. It gradually gains knowledge through the local refinement of regional Gaussians, which is consistent with how humans understand new scenes through embodied exploration. We reorganize an EmbodiedOcc-ScanNet benchmark based on local annotations to facilitate the evaluation of the embodied 3D occupancy prediction task. Our EmbodiedOcc outperforms existing methods by a large margin and accomplishes the embodied occupancy prediction with high accuracy and efficiency. Code: https://github.com/YkiWu/EmbodiedOcc.