cs.CL (2025-07-26)

📊 14 papers total | 🔗 3 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (11, 🔗 3) · Pillar 2: RL Algorithms & Architecture (3)

🔬 Pillar 9: Embodied Foundation Models (11 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 1 | Text2Vis: A Challenging and Diverse Benchmark for Generating Multimodal Visualizations from Text | Proposes a challenging and diverse benchmark for generating multimodal visualizations from text. | large language model, multimodal | |
| 2 | TurQUaz at CheckThat! 2025: Debating Large Language Models for Scientific Web Discourse Detection | Proposes an LLM-debate-based method for detecting scientific web discourse, used to identify references to scientific research. | large language model | |
| 3 | Zero-shot Performance of Generative AI in Brazilian Portuguese Medical Exam | Evaluates the zero-shot performance of generative AI on a Brazilian Portuguese medical exam, revealing language gaps and multimodal reasoning challenges. | large language model, multimodal | |
| 4 | RAG in the Wild: On the (In)effectiveness of LLMs with Mixture-of-Knowledge Retrieval Augmentation | Reveals the limitations of RAG under mixture-of-knowledge retrieval, highlighting the importance of adaptive retrieval strategies. | large language model | |
| 5 | FECT: Factuality Evaluation of Interpretive AI-Generated Claims in Contact Center Conversation Transcripts | Introduces the FECT benchmark dataset for evaluating the factuality of interpretive AI-generated claims in contact center conversation transcripts. | large language model | |
| 6 | HCAttention: Extreme KV Cache Compression via Heterogeneous Attention Computing for LLMs | Proposes HCAttention to address the KV cache compression problem in large language models. | large language model | |
| 7 | FAEDKV: Infinite-Window Fourier Transform for Unbiased KV Cache Compression | FAEDKV: an infinite-window Fourier transform for unbiased KV cache compression. | large language model | |
| 8 | Exploring LLM Autoscoring Reliability in Large-Scale Writing Assessments Using Generalizability Theory | Uses generalizability theory to assess the reliability of LLM autoscoring in large-scale writing assessments. | large language model | |
| 9 | KLAAD: Refining Attention Mechanisms to Reduce Societal Bias in Generative Language Models | KLAAD: reduces societal bias in generative language models by refining attention mechanisms. | large language model | |
| 10 | CaliDrop: KV Cache Compression with Calibration | CaliDrop: strengthens token-eviction-based KV cache compression via calibration, improving LLM performance in long-context scenarios. | large language model | |
| 11 | Flora: Effortless Context Construction to Arbitrary Length and Scale | Flora: an effortless method for constructing long contexts of arbitrary length and scale without human intervention. | large language model | |

🔬 Pillar 2: RL Algorithms & Architecture (3 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 12 | UloRL: An Ultra-Long Output Reinforcement Learning Approach for Advancing Large Language Models' Reasoning Abilities | UloRL: an ultra-long-output reinforcement learning approach for advancing the reasoning abilities of large language models. | reinforcement learning, large language model | |
| 13 | JT-Math: A Multi-Stage Framework for Advanced Mathematical Reasoning in Large Language Models | JT-Math: a multi-stage framework for advanced mathematical reasoning in large language models. | reinforcement learning, large language model, chain-of-thought | |
| 14 | Basic Reading Distillation | Proposes basic reading distillation to improve the natural language processing abilities of small models. | distillation, large language model | |
