A1: A Fully Transparent Open-Source, Adaptive and Efficient Truncated Vision-Language-Action Model
Authors: Kaidong Zhang, Jian Zhang, Rongtao Xu, Yu Sun, Shuoshuo Xue, Youpeng Wen, Xiaoyu Guo, Minghao Guo, Weijia Liufu, Liu Zihou, Kangyi Ji, Yangsong Zhang, Jiarun Zhu, Jingzhi Liu, Zihang Li, Ruiyi Chen, Meng Cao, Jingming Zhang, Shen Zhao, Xiaojun Chang, Feng Zheng, Ivan Laptev, Xiaodan Liang
Category: cs.RO
Published: 2026-04-07
💡 One-Sentence Takeaway
Proposes the A1 framework to cut the inference cost of vision-language-action models.
🎯 Matched Areas: Pillar 1: Robot Control; Pillar 2: RL & Architecture; Pillar 3: Perception & Semantics; Pillar 9: Embodied Foundation Models
Keywords: vision-language-action, adaptive inference, robot manipulation, open-source framework, computational efficiency, action generation, deep learning
📋 Key Points
- Existing vision-language-action models suffer from high latency and compute cost in practice, which limits their effectiveness for robot manipulation.
- A1 introduces a budget-aware adaptive inference scheme that optimizes the full inference pipeline, improving both inference efficiency and success rate.
- Across multiple simulation benchmarks and real-robot tests, A1 clearly outperforms existing baselines in success rate while substantially reducing inference cost.
📝 Abstract (Summary)
Vision-Language-Action (VLA) models show strong potential for open-world robot manipulation, but their high computational cost and latency limit practical deployment. This paper presents A1, a fully open-source VLA framework designed for low-cost, high-throughput inference without sacrificing manipulation success. A1 leverages pretrained VLMs to provide implicit priors for action generation and introduces a budget-aware adaptive inference scheme that monitors action consistency to trigger early termination, substantially reducing inference cost. Experiments show strong results across multiple benchmarks, with inference latency reduced by up to 72% and computation by 76.6%.
🔬 Method Details
Problem definition: The paper targets the high computational cost and latency that existing vision-language-action models face in deployment, which make real-time control expensive.
Core idea: A1 leverages a pretrained vision-language model (VLM) to provide implicit priors for action generation and introduces an adaptive inference scheme that optimizes the full inference pipeline.
Technical framework: A1 consists of a pretrained VLM, an action-generation module, and an adaptive inference mechanism. By monitoring action consistency across intermediate layers, A1 can terminate early and skip redundant computation (see the first sketch below).
Key innovation: The core novelty is Inter-Layer Truncated Flow Matching, which warm-starts denoising across layers so that accurate actions are produced with substantially fewer effective denoising iterations, speeding up inference (see the second sketch below).
Key design: A1 combines the action-consistency monitoring mechanism, a budget-aware inference strategy, and tailored loss functions and network structures to keep success rates high while cutting compute.
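The early-termination mechanism can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the released A1 code: `decode_action`, the threshold `eps`, and the per-layer hidden-state interface are all illustrative assumptions.

```python
# Hypothetical sketch of action-consistency early exit; `decode_action`,
# `eps`, and the per-layer interface are illustrative, not the A1 API.
import torch

def early_exit_action(hidden_states, decode_action, eps=0.05, min_layers=4):
    """Read out an action estimate after each VLM layer and stop as soon as
    consecutive estimates agree within `eps` (mean L2 distance).

    hidden_states: iterable of per-layer hidden states, each (B, D)
    decode_action: callable mapping a hidden state to actions (B, A)
    Returns (action, layers_used).
    """
    prev, used = None, 0
    for i, h in enumerate(hidden_states):
        act = decode_action(h)                 # cheap per-layer read-out
        used = i + 1
        if prev is not None and used >= min_layers:
            if torch.norm(act - prev, dim=-1).mean() < eps:
                return act, used               # estimates agree -> stop early
        prev = act
    return prev, used                          # fall back to the full backbone

# Toy usage: random features with a shared linear action head.
if __name__ == "__main__":
    torch.manual_seed(0)
    head = torch.nn.Linear(64, 7)              # 7-DoF action read-out
    feats = [torch.randn(2, 64) for _ in range(24)]
    act, used = early_exit_action(feats, head, eps=0.5)
    print(f"exited after {used}/24 layers, action shape {tuple(act.shape)}")
```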
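Inter-Layer Truncated Flow Matching can likewise be sketched under a standard rectified-flow convention (noise at t=0, action at t=1, Euler integration). The step counts, the `velocity(x, t, cond)` signature, and the linear partial re-noising rule are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of warm-started, truncated flow-matching sampling;
# the velocity signature and re-noising rule are assumptions, not the
# paper's exact implementation.
import torch

def flow_sample(velocity, cond, x, t0, t1, steps):
    """Euler-integrate dx/dt = velocity(x, t, cond) from t0 to t1."""
    t = torch.full((x.shape[0], 1), t0)
    dt = (t1 - t0) / steps
    for _ in range(steps):
        x = x + dt * velocity(x, t, cond)
        t = t + dt
    return x

def inter_layer_truncated_sample(velocity, layer_feats, act_dim,
                                 full_steps=10, trunc_steps=3):
    """Denoise fully once at the first layer, then warm-start each later
    layer from the previous result: partially re-noise back to t_trunc and
    integrate only the remaining (truncated) segment of the ODE."""
    B = layer_feats[0].shape[0]
    x = torch.randn(B, act_dim)                      # pure noise at t = 0
    x = flow_sample(velocity, layer_feats[0], x, 0.0, 1.0, full_steps)
    t_trunc = 1.0 - trunc_steps / full_steps         # e.g. 0.7
    for cond in layer_feats[1:]:
        noise = torch.randn_like(x)
        x_warm = t_trunc * x + (1.0 - t_trunc) * noise  # linear-path re-noising
        x = flow_sample(velocity, cond, x_warm, t_trunc, 1.0, trunc_steps)
    return x

# Toy usage: a dummy velocity field conditioned on layer features.
if __name__ == "__main__":
    torch.manual_seed(0)
    net = torch.nn.Linear(7 + 1 + 64, 7)
    velocity = lambda x, t, c: net(torch.cat([x, t, c], dim=-1))
    feats = [torch.randn(2, 64) for _ in range(4)]   # 4 layers' features
    print(inter_layer_truncated_sample(velocity, feats, act_dim=7).shape)
```

With `full_steps=10` and `trunc_steps=3`, refining across three extra layers takes 10 + 3*3 = 19 velocity evaluations instead of 4*10 = 40, which is the sense in which warm-starting cuts the effective number of denoising iterations.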
📊 Experimental Highlights
Across simulation benchmarks (LIBERO, VLABench) and real robots (Franka, AgiBot), A1 achieves state-of-the-art success rates while reducing per-episode inference latency by up to 72% and backbone computation by 76.6%. On RoboChallenge, A1 reaches an average success rate of 29.00%, outperforming baselines such as pi0 (28.33%), X-VLA (21.33%), and RDT-1B (15.00%).
🎯 Application Scenarios
Potential applications of A1 include intelligent robots, automated production lines, and human-robot interaction systems. By lowering inference cost and latency, A1 makes vision-language-action models far more practical to deploy, supporting the broader adoption of robotics.
📄 Abstract (Original)
Vision-Language-Action (VLA) models have emerged as a powerful paradigm for open-world robot manipulation, but their practical deployment is often constrained by *cost*: billion-scale VLM backbones and iterative diffusion/flow-based action heads incur high latency and compute, making real-time control expensive on commodity hardware. We present A1, a fully open-source and transparent VLA framework designed for low-cost, high-throughput inference without sacrificing manipulation success. Our approach leverages pretrained VLMs that provide implicit affordance priors for action generation. We release the full training stack (training code, data/data-processing pipeline, intermediate checkpoints, and evaluation scripts) to enable end-to-end reproducibility. Beyond optimizing the VLM alone, A1 targets the full inference pipeline by introducing a budget-aware adaptive inference scheme that jointly accelerates the backbone and the *action head*. Specifically, we monitor action consistency across intermediate VLM layers to trigger early termination, and propose Inter-Layer Truncated Flow Matching that warm-starts denoising across layers, enabling accurate actions with substantially fewer effective denoising iterations. Across simulation benchmarks (LIBERO, VLABench) and real robots (Franka, AgiBot), A1 achieves state-of-the-art success rates while significantly reducing inference cost (e.g., up to 72% lower per-episode latency for flow-matching inference and up to 76.6% backbone computation reduction with minor performance degradation). On RoboChallenge, A1 achieves an average success rate of 29.00%, outperforming baselines including pi0 (28.33%), X-VLA (21.33%), and RDT-1B (15.00%).