ReasonFlux-PRM: Trajectory-Aware PRMs for Long Chain-of-Thought Reasoning in LLMs
Authors: Jiaru Zou, Ling Yang, Jingwen Gu, Jiahao Qiu, Ke Shen, Jingrui He, Mengdi Wang
Category: cs.CL
Published: 2025-06-23 (Updated: 2025-09-25)
Note: Accepted by NeurIPS 2025. Project: https://github.com/Gen-Verse/ReasonFlux
🔗 Code/Project: GitHub
💡 One-Sentence Takeaway
Proposes ReasonFlux-PRM, a trajectory-aware process reward model that addresses reward evaluation for long chain-of-thought reasoning.
🎯 Matched Areas: Pillar 2: RL Algorithms & Architecture (RL & Architecture); Pillar 9: Embodied Foundation Models
Keywords: process reward models, long chain-of-thought reasoning, trajectory awareness, reinforcement learning, model distillation
📋 Key Points
- Existing process reward models fall short when evaluating intermediate reasoning trajectories, and are particularly weak on trajectory-response outputs.
- The proposed ReasonFlux-PRM combines step-level and trajectory-level supervision, enabling more precise evaluation of each step of the reasoning process.
- Experimental results show that ReasonFlux-PRM-7B performs strongly across multiple benchmarks, with average gains of 12.1% in supervised fine-tuning, 4.5% in reinforcement learning, and 6.3% in test-time scaling.
📝 Abstract (Translated)
Process Reward Models (PRMs) have recently emerged as a powerful framework for supervising the intermediate reasoning steps of large language models (LLMs). However, existing PRMs are trained mainly on final model outputs and struggle to evaluate the intermediate thinking trajectories produced by frontier reasoning models. To address this, the paper proposes ReasonFlux-PRM, a novel trajectory-aware PRM designed to evaluate trajectory-response style reasoning traces. ReasonFlux-PRM combines step-level and trajectory-level supervision, enabling fine-grained reward assignment aligned with structured chain-of-thought data. The authors adapt ReasonFlux-PRM to support reward supervision in both offline and online settings and achieve significant performance gains on multiple downstream benchmarks.
🔬 Method Details
Problem Definition: The paper targets the shortcomings of existing process reward models in evaluating intermediate reasoning trajectories, in particular their weak ability to score trajectory-response outputs, which prevents them from effectively guiding the model's reasoning process.
Core Idea: ReasonFlux-PRM introduces a trajectory-aware reward mechanism that combines step-level and trajectory-level supervision to evaluate the reasoning process at a fine granularity, improving both the quality and efficiency of model reasoning.
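The digest does not spell out the exact aggregation rule; below is a minimal sketch of the general idea, assuming the PRM emits per-step scores for the thinking trajectory plus a score for the final response, with a weighting coefficient `alpha` that is purely illustrative:

```python
from typing import List

def trajectory_aware_reward(step_scores: List[float],
                            response_score: float,
                            alpha: float = 0.5) -> float:
    """Illustrative aggregation of step-level and trajectory-level signals.

    `step_scores` are PRM scores for each step of the thinking trajectory and
    `response_score` is the score of the final response. The averaging and the
    weight `alpha` are assumptions for illustration, not the paper's exact rule.
    """
    trajectory_score = sum(step_scores) / max(len(step_scores), 1)
    return alpha * trajectory_score + (1 - alpha) * response_score

# Example: a 4-step thinking trajectory followed by a final answer.
reward = trajectory_aware_reward([0.9, 0.7, 0.8, 0.95], response_score=0.85)
```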
Technical Framework: The overall architecture of ReasonFlux-PRM comprises a data selection module, a reward assignment module, and a model optimization module. The data selection module filters high-quality model distillation data, the reward assignment module assigns a reward to each step along the reasoning trajectory, and the model optimization module uses these rewards for reinforcement learning and fine-tuning.
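As an illustration of the data selection module in the offline setting, here is a hedged sketch of PRM-based filtering of distillation data; `prm_score_fn` and `keep_ratio` are placeholders rather than details from the paper:

```python
def select_distillation_data(examples, prm_score_fn, keep_ratio=0.5):
    """Rank (prompt, trajectory, response) triples by PRM score and keep the
    top fraction for downstream supervised fine-tuning of a smaller model.

    `prm_score_fn` stands in for a call to the trained PRM; `keep_ratio` is an
    illustrative hyperparameter, not a value reported in the paper.
    """
    scored = [(prm_score_fn(ex), ex) for ex in examples]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    num_keep = max(1, int(len(scored) * keep_ratio))
    return [ex for _, ex in scored[:num_keep]]
```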
Key Innovation: The main innovation of ReasonFlux-PRM is its trajectory-aware reward mechanism, which provides finer-grained feedback during the reasoning process and significantly improves reasoning ability compared with traditional reward mechanisms based only on final outputs.
Key Design: ReasonFlux-PRM adopts a multi-level loss function that combines step-level and trajectory-level reward signals, and its architecture is tuned to accommodate different training and inference scenarios.
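One possible form of such a multi-level objective, sketched under the assumption of binary per-step and per-trajectory quality labels; the choice of binary cross-entropy and the weights `lambda_step` and `lambda_traj` are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def multi_level_prm_loss(step_logits: torch.Tensor,
                         step_labels: torch.Tensor,
                         traj_logit: torch.Tensor,
                         traj_label: torch.Tensor,
                         lambda_step: float = 1.0,
                         lambda_traj: float = 1.0) -> torch.Tensor:
    """Two-term PRM training loss: a step-level term over per-step labels plus
    a trajectory-level term over a whole-trace label. The loss functions and
    weights here are assumptions, not the paper's exact objective."""
    step_loss = F.binary_cross_entropy_with_logits(step_logits, step_labels)
    traj_loss = F.binary_cross_entropy_with_logits(traj_logit, traj_label)
    return lambda_step * step_loss + lambda_traj * traj_loss
```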
📊 Experimental Highlights
Experimental results show that ReasonFlux-PRM-7B performs strongly on several benchmarks (AIME, MATH500, and GPQA-Diamond): it selects higher-quality data than strong baselines such as Qwen2.5-Math-PRM-72B, and yields average gains of 12.1% in supervised fine-tuning, 4.5% in reinforcement learning, and 6.3% in test-time scaling.
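For the test-time scaling setting, the original abstract describes reward-guided Best-of-N selection; a minimal sketch follows, with `generate_fn` and `prm_score_fn` as hypothetical stand-ins for the policy model's sampler and the ReasonFlux-PRM scorer:

```python
def best_of_n(prompt, generate_fn, prm_score_fn, n: int = 8):
    """Reward-guided Best-of-N at test time: sample N candidate
    trajectory-response outputs and return the one the PRM scores highest.

    `generate_fn` and `prm_score_fn` are placeholders, not APIs from the paper.
    """
    candidates = [generate_fn(prompt) for _ in range(n)]
    scores = [prm_score_fn(prompt, cand) for cand in candidates]
    best_index = max(range(n), key=lambda i: scores[i])
    return candidates[best_index]
```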
🎯 Application Scenarios
The results of ReasonFlux-PRM have broad application potential across areas such as natural language processing, intelligent question answering, and education technology. By strengthening the reasoning ability of large language models, the method can support more capable dialogue systems and automated learning tools, pushing AI technology further forward.
📄 Abstract (Original)
Process Reward Models (PRMs) have recently emerged as a powerful framework for supervising intermediate reasoning steps in large language models (LLMs). Previous PRMs are primarily trained on model final output responses and struggle to evaluate intermediate thinking trajectories robustly, especially in the emerging setting of trajectory-response outputs generated by frontier reasoning models like Deepseek-R1. In this work, we introduce ReasonFlux-PRM, a novel trajectory-aware PRM explicitly designed to evaluate the trajectory-response type of reasoning traces. ReasonFlux-PRM incorporates both step-level and trajectory-level supervision, enabling fine-grained reward assignment aligned with structured chain-of-thought data. We adapt ReasonFlux-PRM to support reward supervision under both offline and online settings, including (i) selecting high-quality model distillation data for downstream supervised fine-tuning of smaller models, (ii) providing dense process-level rewards for policy optimization during reinforcement learning, and (iii) enabling reward-guided Best-of-N test-time scaling. Empirical results on challenging downstream benchmarks such as AIME, MATH500, and GPQA-Diamond demonstrate that ReasonFlux-PRM-7B selects higher quality data than strong PRMs (e.g., Qwen2.5-Math-PRM-72B) and human-curated baselines. Furthermore, our derived ReasonFlux-PRM-7B yields consistent performance improvements, achieving average gains of 12.1% in supervised fine-tuning, 4.5% in reinforcement learning, and 6.3% in test-time scaling. We also release our efficient ReasonFlux-PRM-1.5B for resource-constrained applications and edge deployment. Project: https://github.com/Gen-Verse/ReasonFlux