cs.CL (2025-10-12)

📊 19 papers in total | 🔗 6 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (16 papers, 🔗 4) · Pillar 2: RL & Architecture (3 papers, 🔗 2)

🔬 Pillar 9: Embodied Foundation Models (16 papers)

| # | Title | One-line Summary | Tags |
|---|-------|------------------|------|
| 1 | Multimodal Retrieval-Augmented Generation with Large Language Models for Medical VQA | Proposes retrieval-augmented generation (RAG) with general-purpose LLMs for medical VQA, improving clinical decision support. | large language model, multimodal |
| 2 | BitMar: Low-Bit Multimodal Fusion with Episodic Memory for Edge Devices | BitMar: a low-bit multimodal fusion model with episodic memory for edge devices. | multimodal |
| 3 | Large Language Models for Full-Text Methods Assessment: A Case Study on Mediation Analysis | Uses large language models to assess full-text methods sections, with mediation analysis as a case study. | large language model |
| 4 | Dynamic Topic Evolution with Temporal Decay and Attention in Large Language Models | Proposes a dynamic topic evolution framework for LLMs based on temporal decay and attention mechanisms. | large language model |
| 5 | Merlin's Whisper: Enabling Efficient Reasoning in Large Language Models via Black-box Persuasive Prompting | Proposes Merlin's Whisper: improving LLM reasoning efficiency via black-box persuasive prompting. | large language model |
| 6 | UltraLLaDA: Scaling the Context Length to 128K for Diffusion Large Language Models | UltraLLaDA: extends the context length of diffusion LLMs to 128K via post-training. | large language model |
| 7 | Assessing Large Language Models for Structured Medical Order Extraction | Uses general-purpose LLMs with few-shot examples for structured medical order extraction. | large language model |
| 8 | STEAM: A Semantic-Level Knowledge Editing Framework for Large Language Models | Proposes STEAM, a framework that improves the coherence of LLM knowledge editing through semantic alignment. | large language model |
| 9 | DUAL-Bench: Measuring Over-Refusal and Robustness in Vision-Language Models | Proposes DUAL-Bench for evaluating over-refusal and robustness in vision-language models. | multimodal |
| 10 | Is Implicit Knowledge Enough for LLMs? A RAG Approach for Tree-based Structures | Proposes a RAG-based linearization of tree-structured knowledge, improving LLM efficiency on hierarchical data. | large language model |
| 11 | Preserving LLM Capabilities through Calibration Data Curation: From Analysis to Optimization | Proposes COLA, a framework that preserves the capabilities of compressed LLMs through calibration data curation. | large language model |
| 12 | Detecting Hallucinations in Authentic LLM-Human Interactions | Proposes AuthenHallu: the first hallucination-detection benchmark built on authentic LLM-human interactions. | large language model |
| 13 | FML-bench: A Benchmark for Automatic ML Research Agents Highlighting the Importance of Exploration Breadth | FML-bench: a benchmark for evaluating automatic ML research agents, highlighting the importance of exploration breadth. | large language model |
| 14 | NIM: Neuro-symbolic Ideographic Metalanguage for Inclusive Communication | Proposes NIM, a neuro-symbolic ideographic metalanguage for promoting inclusive communication. | large language model |
| 15 | Do Audio LLMs Really LISTEN, or Just Transcribe? Measuring Lexical vs. Acoustic Emotion Cues Reliance | Proposes the LISTEN benchmark to assess emotion understanding in audio language models. | multimodal |
| 16 | Harnessing Consistency for Robust Test-Time LLM Ensemble | Proposes CoRE, which harnesses consistency to improve the robustness of test-time LLM ensembles. | large language model |

🔬 Pillar 2: RL & Architecture (3 papers)

| # | Title | One-line Summary | Tags |
|---|-------|------------------|------|
| 17 | RePro: Training Language Models to Faithfully Recycle the Web for Pretraining | Proposes RePro, which trains language models via reinforcement learning to efficiently and faithfully recycle web data for pretraining. | reinforcement learning, large language model |
| 18 | VOLTAGE: A Versatile Contrastive Learning based OCR Methodology for ultra low-resource scripts through Auto Glyph Feature Extraction | VOLTAGE: an OCR method for ultra-low-resource scripts based on contrastive learning and automatic glyph feature extraction. | contrastive learning |
| 19 | RECON: Reasoning with Condensation for Efficient Retrieval-Augmented Generation | RECON: improves retrieval-augmented generation efficiency via reasoning with condensation, significantly reducing context length while improving performance. | reinforcement learning, distillation |
