cs.CL (2025-12-01)

📊 18 papers in total | 🔗 4 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (12, 🔗 2) · Pillar 2: RL & Architecture (6, 🔗 2)

🔬 Pillar 9: Embodied Foundation Models (12 papers)

#题目一句话要点标签🔗
1 Enhancing Foundation Models in Transaction Understanding with LLM-based Sentence Embeddings 利用LLM句子嵌入增强交易理解中的基础模型 large language model foundation model
2 Securing Large Language Models (LLMs) from Prompt Injection Attacks 评估JATMO防御LLM免受提示注入攻击的有效性,揭示其局限性与改进方向 large language model instruction following
3 DETAIL Matters: Measuring the Impact of Prompt Specificity on Reasoning in Large Language Models DETAIL框架:评估提示词细节程度对大语言模型推理能力的影响 large language model
4 The Art of Scaling Test-Time Compute for Large Language Models 大规模语言模型测试时计算缩放策略的系统性对比研究 large language model
5 OPOR-Bench: Evaluating Large Language Models on Online Public Opinion Report Generation 提出OPOR-Bench基准测试,用于评估大语言模型在在线舆情报告生成任务中的表现 large language model
6 MMAG: Mixed Memory-Augmented Generation for Large Language Models Applications 提出MMAG框架,通过混合记忆增强提升大型语言模型在多轮交互中的连贯性和个性化 large language model
7 PromptBridge: Cross-Model Prompt Transfer for Large Language Models PromptBridge:一种用于大型语言模型的跨模型Prompt迁移框架 large language model
8 Beware of Reasoning Overconfidence: Pitfalls in the Reasoning Process for Multi-solution Tasks 针对多解任务,揭示LLM推理过程中的过度自信问题,并提出缓解策略。 large language model chain-of-thought
9 Think Before You Prune: Self-Reflective Structured Pruning for Reasoning Language Models 提出RESP自反思结构化剪枝框架,提升推理大模型在资源受限环境下的性能。 chain-of-thought
10 Four Over Six: More Accurate NVFP4 Quantization with Adaptive Block Scaling 提出Four Over Six (4/6)自适应块缩放NVFP4量化算法,提升大模型训练和推理精度。 large language model
11 Latent Debate: A Surrogate Framework for Interpreting LLM Thinking 提出Latent Debate框架,通过隐式辩论解释LLM的推理过程并检测幻觉。 large language model
12 BHRAM-IL: A Benchmark for Hallucination Recognition and Assessment in Multiple Indian Languages BHRAM-IL:多印度语言LLM幻觉识别与评估基准 large language model

🔬 Pillar 2: RL & Architecture (6 papers)

#题目一句话要点标签🔗
13 DyFuLM: An Advanced Multimodal Framework for Sentiment Analysis 提出DyFuLM,用于提升多模态情感分析中细粒度情感捕捉与表示能力 representation learning MAE multimodal
14 Beyond SFT: Reinforcement Learning for Safer Large Reasoning Models with Better Reasoning Ability 提出基于强化学习的安全大模型推理框架,提升安全性的同时保持推理能力 reinforcement learning large language model chain-of-thought
15 Rectifying LLM Thought from Lens of Optimization 提出RePro,通过优化视角提升LLM的思维链推理能力,缓解过度思考问题 reinforcement learning large language model chain-of-thought
16 MCAT: Scaling Many-to-Many Speech-to-Text Translation with MLLMs to 70 Languages MCAT:利用MLLM扩展多对多语音到文本翻译至70种语言 curriculum learning large language model multimodal
17 Lightweight Latent Reasoning for Narrative Tasks 提出LiteReason,通过轻量级潜在推理加速叙事任务中的强化学习,显著降低计算成本。 reinforcement learning large language model
18 Learning the Boundary of Solvability: Aligning LLMs to Detect Unsolvable Problems 提出UnsolvableQA数据集与UnsolvableRL框架,提升LLM对不可解问题的识别能力。 reinforcement learning large language model
