cs.CL (2025-12-24)

📊 20 papers in total | 🔗 1 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (13) · Pillar 2: RL & Architecture (5 🔗1) · Pillar 3: Perception & Semantics (1) · Pillar 4: Generative Motion (1)

🔬 Pillar 9: Embodied Foundation Models (13 papers)

| # | Title | One-line summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 1 | Morality is Contextual: Learning Interpretable Moral Contexts from Human Data with Probabilistic Clustering and Large Language Models | The COMETH framework learns interpretable moral contexts via probabilistic clustering and LLMs, improving moral-judgment accuracy. | large language model | |
| 2 | Semi-Supervised Learning for Large Language Models Safety and Content Moderation | Proposes a semi-supervised learning approach to improve LLM safety and content moderation. | large language model | |
| 3 | Neural Probe-Based Hallucination Detection for Large Language Models | Proposes a neural-probe-based hallucination detection framework for LLMs, improving detection precision at low false-positive rates. | large language model | |
| 4 | ClarifyMT-Bench: Benchmarking and Improving Multi-Turn Clarification for Conversational Large Language Models | Introduces ClarifyMT-Bench for evaluating and improving the multi-turn clarification ability of conversational LLMs. | large language model | |
| 5 | Foundation Model-based Evaluation of Neuropsychiatric Disorders: A Lifespan-Inclusive, Multi-Modal, and Multi-Lingual Study | Proposes FEND, a foundation-model-based framework for assessing neuropsychiatric disorders, enabling multi-modal, multi-lingual, lifespan-inclusive diagnosis. | foundation model | |
| 6 | Evaluating Novelty in AI-Generated Research Plans Using Multi-Workflow LLM Pipelines | Uses multi-workflow LLM pipelines to evaluate the novelty of AI-generated research plans. | large language model, multimodal | |
| 7 | Rethinking Supervised Fine-Tuning: Emphasizing Key Answer Tokens for Improved LLM Accuracy | SFTKey: improves the accuracy of LLM supervised fine-tuning by emphasizing key answer tokens. | large language model, chain-of-thought | |
| 8 | Reflection Pretraining Enables Token-Level Self-Correction in Biological Sequence Models | Proposes reflection pretraining, giving biological sequence models token-level self-correction ability. | large language model, chain-of-thought | |
| 9 | ReaSeq: Unleashing World Knowledge via Reasoning for Sequential Modeling | ReaSeq: unleashes world knowledge via reasoning for sequential modeling, improving recommender-system performance. | large language model, chain-of-thought | |
| 10 | Teaching People LLM's Errors and Getting it Right | Studies teaching users LLM error patterns, improving their ability to recognize LLM failure cases. | large language model | |
| 11 | C2LLM Technical Report: A New Frontier in Code Retrieval via Adaptive Cross-Attention Pooling | C2LLM: a new advance in code retrieval via adaptive cross-attention pooling. | large language model | |
| 12 | LLM_annotate: A Python package for annotating and analyzing fiction characters | LLM_annotate: a Python toolkit for analyzing fiction characters, improving annotation and analysis efficiency. | large language model | |
| 13 | Architectural Trade-offs in Small Language Models Under Compute Constraints | Studies architectural trade-offs of small language models under compute constraints, revealing how architecture and training budget affect performance. | large language model | |

🔬 Pillar 2: RL & Architecture (5 papers)

| # | Title | One-line summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 14 | Distilling the Essence: Efficient Reasoning Distillation via Sequence Truncation | Proposes a sequence-truncation-based reasoning distillation method, improving small-model reasoning efficiency. | distillation, chain-of-thought | |
| 15 | MultiMind at SemEval-2025 Task 7: Crosslingual Fact-Checked Claim Retrieval via Multi-Source Alignment | TriAligner: crosslingual fact-checked claim retrieval via multi-source alignment. | representation learning, contrastive learning, large language model | |
| 16 | Where Did This Sentence Come From? Tracing Provenance in LLM Reasoning Distillation | Proposes a provenance-tracing framework for reasoning distillation, analyzing where student-model capabilities come from and guiding data selection. | teacher-student distillation | |
| 17 | Semantic Refinement with LLMs for Graph Representations | Proposes the DAS framework, using LLMs for semantic refinement of graph representations to address graph-structure heterogeneity. | representation learning, large language model | |
| 18 | NVIDIA Nemotron 3: Efficient and Open Intelligence | NVIDIA Nemotron 3: an efficient, open family of models supporting million-token context. | reinforcement learning, Mamba | |

🔬 Pillar 3: Perception & Semantics (1 paper)

| # | Title | One-line summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 19 | Decoding Predictive Inference in Visual Language Processing via Spatiotemporal Neural Coherence | Proposes a framework for decoding predictive inference in visual language processing based on spatiotemporal neural coherence. | optical flow, spatiotemporal, multimodal | |

🔬 Pillar 4: Generative Motion (1 paper)

| # | Title | One-line summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 20 | Optimizing Decoding Paths in Masked Diffusion Models by Quantifying Uncertainty | Optimizes decoding paths in masked diffusion models by quantifying uncertainty. | MDM | |
