NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model
Authors: NVIDIA: Aarti Basant, Abhijit Khairnar, Abhijit Paithankar, Abhinav Khattar, Adithya Renduchintala, Aditya Malte, Akhiad Bercovich, Akshay Hazare, Alejandra Rico, Aleksander Ficek, Alex Kondratenko, Alex Shaposhnikov, Alexander Bukharin, Ali Taghibakhshi, Amelia Barton, Ameya Sunil Mahabaleshwarkar, Amy Shen, Andrew Tao, Ann Guan, Anna Shors, Anubhav Mandarwal, Arham Mehta, Arun Venkatesan, Ashton Sharabiani, Ashwath Aithal, Ashwin Poojary, Ayush Dattagupta, Balaram Buddharaju, Banghua Zhu, Barnaby Simkin, Bilal Kartal, Bita Darvish Rouhani, Bobby Chen, Boris Ginsburg, Brandon Norick, Brian Yu, Bryan Catanzaro, Charles Wang, Charlie Truong, Chetan Mungekar, Chintan Patel, Chris Alexiuk, Christian Munley, Christopher Parisien, Dan Su, Daniel Afrimi, Daniel Korzekwa, Daniel Rohrer, Daria Gitman, David Mosallanezhad, Deepak Narayanan, Dima Rekesh, Dina Yared, Dmytro Pykhtar, Dong Ahn, Duncan Riach, Eileen Long, Elliott Ning, Eric Chung, Erick Galinkin, Evelina Bakhturina, Gargi Prasad, Gerald Shen, Haifeng Qian, Haim Elisha, Harsh Sharma, Hayley Ross, Helen Ngo, Herman Sahota, Hexin Wang, Hoo Chang Shin, Hua Huang, Iain Cunningham, Igor Gitman, Ivan Moshkov, Jaehun Jung, Jan Kautz, Jane Polak Scowcroft, Jared Casper, Jian Zhang, Jiaqi Zeng, Jimmy Zhang, Jinze Xue, Jocelyn Huang, Joey Conway, John Kamalu, Jonathan Cohen, Joseph Jennings, Julien Veron Vialard, Junkeun Yi, Jupinder Parmar, Kari Briski, Katherine Cheung, Katherine Luna, Keith Wyss, Keshav Santhanam, Kezhi Kong, Krzysztof Pawelec, Kumar Anik, Kunlun Li, Kushan Ahmadian, Lawrence McAfee, Laya Sleiman, Leon Derczynski, Luis Vega, Maer Rodrigues de Melo, Makesh Narsimhan Sreedhar, Marcin Chochowski, Mark Cai, Markus Kliegl, Marta Stepniewska-Dziubinska, Matvei Novikov, Mehrzad Samadi, Meredith Price, Meriem Boubdir, Michael Boone, Michael Evans, Michal Bien, Michal Zawalski, Miguel Martinez, Mike Chrzanowski, Mohammad Shoeybi, Mostofa Patwary, Namit Dhameja, Nave Assaf, Negar Habibi, Nidhi Bhatia, Nikki Pope, Nima Tajbakhsh, Nirmal Kumar Juluru, Oleg Rybakov, Oleksii Hrinchuk, Oleksii Kuchaiev, Oluwatobi Olabiyi, Pablo Ribalta, Padmavathy Subramanian, Parth Chadha, Pavlo Molchanov, Peter Dykas, Peter Jin, Piotr Bialecki, Piotr Januszewski, Pradeep Thalasta, Prashant Gaikwad, Prasoon Varshney, Pritam Gundecha, Przemek Tredak, Rabeeh Karimi Mahabadi, Rajen Patel, Ran El-Yaniv, Ranjit Rajan, Ria Cheruvu, Rima Shahbazyan, Ritika Borkar, Ritu Gala, Roger Waleffe, Ruoxi Zhang, Russell J. Hewett, Ryan Prenger, Sahil Jain, Samuel Kriman, Sanjeev Satheesh, Saori Kaji, Sarah Yurick, Saurav Muralidharan, Sean Narenthiran, Seonmyeong Bak, Sepehr Sameni, Seungju Han, Shanmugam Ramasamy, Shaona Ghosh, Sharath Turuvekere Sreenivas, Shelby Thomas, Shizhe Diao, Shreya Gopal, Shrimai Prabhumoye, Shubham Toshniwal, Shuoyang Ding, Siddharth Singh, Siddhartha Jain, Somshubra Majumdar, Soumye Singhal, Stefania Alborghetti, Syeda Nahida Akter, Terry Kong, Tim Moon, Tomasz Hliwiak, Tomer Asida, Tony Wang, Tugrul Konuk, Twinkle Vashishth, Tyler Poon, Udi Karpas, Vahid Noroozi, Venkat Srinivasan, Vijay Korthikanti, Vikram Fugro, Vineeth Kalluru, Vitaly Kurin, Vitaly Lavrukhin, Wasi Uddin Ahmad, Wei Du, Wonmin Byeon, Ximing Lu, Xin Dong, Yashaswi Karnati, Yejin Choi, Yian Zhang, Ying Lin, Yonggan Fu, Yoshi Suhara, Zhen Dong, Zhiyu Li, Zhongbo Zhu, Zijia Chen
Categories: cs.CL, cs.AI, cs.LG
Published: 2025-08-20 (Updated: 2025-09-02)
💡 One-Sentence Takeaway
Introduces Nemotron-Nano-9B-v2 to improve both accuracy and efficiency on reasoning workloads.
🎯 Matched Area: Pillar 2: RL Algorithms & Architecture
Keywords: hybrid model, reasoning, Mamba-Transformer, model compression, high throughput, natural language processing, deep learning
📋 Key Points
- Existing reasoning models face limited throughput and accuracy when handling long inputs and outputs.
- The proposed Nemotron-Nano-9B-v2 replaces most self-attention layers with Mamba-2 layers, significantly increasing inference speed while achieving on-par or better accuracy.
- Experiments show that Nemotron-Nano-9B-v2 matches or exceeds the accuracy of similarly-sized models on reasoning benchmarks while delivering up to 6x higher inference throughput.
📝 Abstract (Summary)
We introduce Nemotron-Nano-9B-v2, a hybrid Mamba-Transformer language model designed to increase throughput for reasoning workloads while achieving state-of-the-art accuracy compared to similarly-sized models. Nemotron-Nano-9B-v2 builds on the Nemotron-H architecture, replacing the majority of self-attention layers to improve inference speed when generating the long thinking traces needed for reasoning. By pre-training a 12-billion-parameter model on 20 trillion tokens and then compressing and distilling it with the Minitron strategy, Nemotron-Nano-9B-v2 achieves up to 6x higher throughput in reasoning settings while matching or exceeding the accuracy of existing similarly-sized models on reasoning benchmarks. We release Nemotron-Nano-9B-v2, the related checkpoints, and datasets on Hugging Face.
🔬 Method Details
Problem definition: This work targets the limited throughput and accuracy of existing reasoning models when processing long inputs and outputs. The standard Transformer architecture is a clear bottleneck for inference speed and efficiency in this regime, largely because its self-attention KV cache grows with sequence length.
Core idea: Nemotron-Nano-9B-v2 replaces the majority of self-attention layers with Mamba-2 layers, aiming to increase inference speed while maintaining or improving accuracy. This design lets the model generate the long thinking traces required for reasoning far more efficiently; a minimal sketch of such a hybrid layer stack is shown below.
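As a minimal sketch, assuming illustrative layer counts and attention placements (not the released Nemotron-Nano-9B-v2 configuration), a hybrid stack can be described as a layout in which most blocks are Mamba-2 and only a few are self-attention:

```python
# Minimal sketch of a hybrid layer layout: most blocks are Mamba-2,
# with a small number of self-attention blocks interleaved.
# Layer count and attention positions are illustrative assumptions,
# not the released Nemotron-Nano-9B-v2 configuration.

def build_hybrid_layout(num_layers: int = 56, attention_positions=(8, 22, 36, 50)):
    """Return a list of layer types, e.g. ['mamba2', ..., 'attention', ...]."""
    return ["attention" if i in attention_positions else "mamba2"
            for i in range(num_layers)]

if __name__ == "__main__":
    layout = build_hybrid_layout()
    n_attn = layout.count("attention")
    print(f"{len(layout)} layers total: {n_attn} attention, {len(layout) - n_attn} mamba2")
```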
Technical framework: The overall architecture builds on Nemotron-H. A 12-billion-parameter base model is first pre-trained and then compressed and distilled with the Minitron strategy, so that inference over contexts of up to 128k tokens fits on a single NVIDIA A10G GPU (22 GiB of memory, bfloat16 precision). A back-of-envelope memory estimate motivating this target follows below.
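The following sketch illustrates why reducing the number of attention layers matters for the 22 GiB budget; the layer counts, KV-head counts, and head dimension are assumptions for illustration, not the paper's exact accounting:

```python
# Back-of-envelope memory estimate for long-context inference on a 22 GiB A10G.
# All architectural numbers below (layers, KV heads, head_dim) are illustrative
# assumptions, not the released Nemotron-Nano-9B-v2 configuration.

GIB = 1024 ** 3

def kv_cache_bytes(seq_len, n_attn_layers, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Size of a bfloat16 KV cache: two tensors (K and V) per attention layer."""
    return 2 * seq_len * n_kv_heads * head_dim * bytes_per_elem * n_attn_layers

weights = 9e9 * 2                                           # ~9B parameters in bfloat16
hybrid = kv_cache_bytes(128 * 1024, n_attn_layers=4)        # few attention layers
full_attn = kv_cache_bytes(128 * 1024, n_attn_layers=56)    # hypothetical all-attention stack

print(f"weights:            {weights / GIB:5.1f} GiB")
print(f"hybrid KV cache:    {hybrid / GIB:5.1f} GiB -> total {(weights + hybrid) / GIB:.1f} GiB")
print(f"all-attn KV cache:  {full_attn / GIB:5.1f} GiB -> total {(weights + full_attn) / GIB:.1f} GiB")
```

Under these assumptions, the hybrid layout stays within the 22 GiB budget at 128k tokens, while a pure-attention stack of the same depth would not.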
Key innovation: The main novelty is replacing most self-attention layers with Mamba-2 layers, which substantially increases inference speed while keeping accuracy on par with or better than existing similarly-sized models. Unlike a standard Transformer, whose KV cache grows with every generated token, the Mamba-2 layers carry a fixed-size recurrent state, which fundamentally changes the inference-efficiency profile of the hybrid architecture.
Key design: Training uses an FP8 recipe with pre-training on 20 trillion tokens. Compression and distillation follow the Minitron strategy, allowing the model to handle longer inputs and outputs at inference time within a smaller memory footprint; specific hyperparameters and loss designs are detailed in the paper. A generic distillation-loss sketch is given below.
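The exact Minitron compression and distillation losses are described in the paper; purely as an illustration, the sketch below shows a standard logit-distillation objective (temperature-scaled KL divergence between teacher and student), the general family such recipes build on. The function name and temperature value are assumptions for the example, not the paper's settings:

```python
import torch
import torch.nn.functional as F

def logit_distillation_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            temperature: float = 1.0) -> torch.Tensor:
    """Generic KL-divergence distillation loss between teacher and student logits.

    This is a standard knowledge-distillation objective for illustration only;
    it is not claimed to be the exact loss used by the Minitron recipe.
    """
    # Soften both distributions with the temperature, then match them with KL.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kl = F.kl_div(log_p_student, p_teacher, reduction="batchmean")
    # Scale by T^2 so gradients keep a comparable magnitude across temperatures.
    return kl * (temperature ** 2)

if __name__ == "__main__":
    student = torch.randn(4, 32000)   # (batch, vocab) - illustrative shapes
    teacher = torch.randn(4, 32000)
    print(logit_distillation_loss(student, teacher, temperature=2.0).item())
```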
📊 Experimental Highlights
On reasoning benchmarks, Nemotron-Nano-9B-v2 matches or exceeds the accuracy of similarly-sized models such as Qwen3-8B, while delivering up to 6x higher throughput in reasoning settings with 8k input and 16k output tokens. This sizable gain demonstrates the model's advantage for reasoning workloads.
🎯 Application Scenarios
Nemotron-Nano-9B-v2 has broad application potential, especially in natural-language-processing tasks that demand efficient inference, such as dialogue systems, text generation, and complex reasoning. Its high throughput and accuracy make it well suited to real-time scenarios, where it can noticeably improve user experience. The model may also prove useful for multimodal learning and large-scale data processing in the future.
📄 Abstract (Original)
We introduce Nemotron-Nano-9B-v2, a hybrid Mamba-Transformer language model designed to increase throughput for reasoning workloads while achieving state-of-the-art accuracy compared to similarly-sized models. Nemotron-Nano-9B-v2 builds on the Nemotron-H architecture, in which the majority of the self-attention layers in the common Transformer architecture are replaced with Mamba-2 layers, to achieve improved inference speed when generating the long thinking traces needed for reasoning. We create Nemotron-Nano-9B-v2 by first pre-training a 12-billion-parameter model (Nemotron-Nano-12B-v2-Base) on 20 trillion tokens using an FP8 training recipe. After aligning Nemotron-Nano-12B-v2-Base, we employ the Minitron strategy to compress and distill the model with the goal of enabling inference on up to 128k tokens on a single NVIDIA A10G GPU (22GiB of memory, bfloat16 precision). Compared to existing similarly-sized models (e.g., Qwen3-8B), we show that Nemotron-Nano-9B-v2 achieves on-par or better accuracy on reasoning benchmarks while achieving up to 6x higher inference throughput in reasoning settings like 8k input and 16k output tokens. We are releasing Nemotron-Nano-9B-v2, Nemotron-Nano-12B-v2-Base, and Nemotron-Nano-9B-v2-Base checkpoints along with the majority of our pre- and post-training datasets on Hugging Face.