cs.CL (2025-04-28)

📊 27 papers in total | 🔗 3 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (22, 🔗 2) · Pillar 2: RL & Architecture (5, 🔗 1)

🔬 Pillar 9: Embodied Foundation Models (22 papers)

| # | Title | One-Line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 1 | Multimodal Conditioned Diffusive Time Series Forecasting | Proposes MCD-TSF, a multimodal conditioned diffusion model that fuses timestamp and text information for time series forecasting. | multimodal, TAMP | |
| 2 | A Multimodal Pipeline for Clinical Data Extraction: Applying Vision-Language Models to Scans of Transfusion Reaction Reports | Proposes a multimodal pipeline that uses vision-language models to extract clinical data from scans of transfusion reaction reports. | multimodal | |
| 3 | MDD-LLM: Towards Accuracy Large Language Models for Major Depressive Disorder Diagnosis | Proposes MDD-LLM, fine-tuning large language models to improve diagnostic accuracy for major depressive disorder. | large language model | |
| 4 | BRIDGE: Benchmarking Large Language Models for Understanding Real-world Clinical Practice Text | Introduces BRIDGE, a multilingual benchmark for evaluating how well large language models understand real-world clinical text. | large language model | |
| 5 | Enhancing Systematic Reviews with Large Language Models: Using GPT-4 and Kimi | Enhances systematic reviews with GPT-4 and Kimi, evaluating LLM performance on code generation. | large language model | |
| 6 | Can a Crow Hatch a Falcon? Lineage Matters in Predicting Large Language Model Performance | Proposes a lineage-regularized matrix factorization framework that exploits model lineage to predict LLM performance. | large language model | |
| 7 | Systematic Bias in Large Language Models: Discrepant Response Patterns in Binary vs. Continuous Judgment Tasks | Reveals systematic biases in LLM responses between binary and continuous judgment tasks. | large language model | |
| 8 | Large Language Models are Qualified Benchmark Builders: Rebuilding Pre-Training Datasets for Advancing Code Intelligence Tasks | Uses LLMs to rebuild pre-training datasets, improving performance on code intelligence tasks. | large language model | |
| 9 | LLM-Assisted Automated Deductive Coding of Dialogue Data: Leveraging Dialogue-Specific Characteristics to Enhance Contextual Understanding | Proposes an LLM-assisted framework for automated deductive coding of dialogue data, improving contextual understanding. | large language model, chain-of-thought | |
| 10 | Context Selection and Rewriting for Video-based Educational Question Generation | Proposes a context selection and rewriting framework for video-based educational question generation, improving question quality and relevance. | large language model, TAMP | |
| 11 | AutoJudge: Judge Decoding Without Manual Annotation | AutoJudge: an LLM inference acceleration method that speeds up decoding without manual annotation. | large language model | |
| 12 | Better To Ask in English? Evaluating Factual Accuracy of Multilingual LLMs in English and Low-Resource Languages | Evaluates the factual accuracy of multilingual LLMs in English and low-resource languages, finding English prompts more reliable. | large language model | |
| 13 | semi-PD: Towards Efficient LLM Serving via Phase-Wise Disaggregated Computation and Unified Storage | semi-PD: efficient LLM serving via phase-wise disaggregated computation and unified storage. | large language model | |
| 14 | Taming the Titans: A Survey of Efficient LLM Inference Serving | Surveys efficient LLM inference serving, covering instance- and cluster-level optimizations and emerging scenarios. | large language model | |
| 15 | Annif at SemEval-2025 Task 5: Traditional XMTC augmented by LLMs | Annif combines traditional XMTC with LLMs to improve the accuracy and efficiency of multilingual subject indexing. | large language model | |
| 16 | TD-EVAL: Revisiting Task-Oriented Dialogue Evaluation by Combining Turn-Level Precision with Dialogue-Level Comparisons | TD-EVAL: revisits task-oriented dialogue evaluation by combining turn-level precision with dialogue-level comparisons. | large language model | |
| 17 | Learning to Plan Before Answering: Self-Teaching LLMs to Learn Abstract Plans for Problem Solving | Proposes LEPA, which self-trains LLMs to learn abstract plans for problem solving. | large language model | |
| 18 | Towards Long Context Hallucination Detection | Introduces a long-context hallucination detection dataset and a decompose-and-aggregate model that significantly improves detection performance. | large language model | |
| 19 | Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory | Mem0: building production-ready AI agents with scalable long-term memory. | large language model | |
| 20 | Moral Reasoning Across Languages: The Critical Role of Low-Resource Languages in LLMs | Introduces MMRB, a multilingual moral reasoning benchmark, revealing the critical role of low-resource languages in LLMs. | large language model | |
| 21 | Coreference Resolution for Vietnamese Narrative Texts | Proposes and evaluates LLM-based coreference resolution methods for Vietnamese narrative texts. | large language model | |
| 22 | Detecting Effects of AI-Mediated Communication on Language Complexity and Sentiment | Detects the effects of AI-mediated communication on social media by analyzing changes in language complexity and sentiment. | large language model | |

🔬 Pillar 2: RL & Architecture (5 papers)

| # | Title | One-Line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 23 | Knowledge-Driven Agentic Scientific Corpus Distillation Framework for Biomedical Large Language Models Training | Proposes a knowledge-driven agentic scientific corpus distillation framework for training biomedical LLMs. | distillation, large language model | |
| 24 | Knowledge Distillation of Domain-adapted LLMs for Question-Answering in Telecom | Studies knowledge distillation strategies for domain-adapted LLMs in telecom question answering. | distillation, large language model | |
| 25 | VCM: Vision Concept Modeling Based on Implicit Contrastive Learning with Vision-Language Instruction Fine-Tuning | Proposes VCM, a vision concept modeling framework based on implicit contrastive learning and vision-language instruction fine-tuning. | contrastive learning | |
| 26 | Toward Evaluative Thinking: Meta Policy Optimization with Evolving Reward Models | Proposes MPO, a meta policy optimization framework that improves the robustness of LLM alignment via evolving reward models. | reward design, large language model | |
| 27 | GenCLS++: Pushing the Boundaries of Generative Classification in LLMs Through Comprehensive SFT and RL Studies Across Diverse Datasets | GenCLS++: pushes the boundaries of generative classification in LLMs through comprehensive SFT and RL studies across diverse datasets. | reinforcement learning, large language model | |

⬅️ Back to the cs.CL index · 🏠 Back to the home page