cs.CL(2026-03-23)

📊 10 papers in total | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (6, 🔗 1) · Pillar 2: RL & Architecture (4, 🔗 1)

🔬 Pillar 9: Embodied Foundation Models (6 papers)

| # | Title | One-line takeaway | Tags | 🔗 |
|---|-------|-------------------|------|----|
| 1 | SemEval-2026 Task 12: Abductive Event Reasoning: Towards Real-World Event Causal Inference for Large Language Models | Proposes an evidence-based abductive event reasoning benchmark for evaluating large language models on real-world event causal inference. | large language model | |
| 2 | DATASHI: A Parallel English-Tashlhiyt Corpus for Orthography Normalization and Low-Resource Language Processing | Builds DATASHI, a parallel English–Tashlhiyt corpus for orthography normalization and low-resource language processing. | large language model, multimodal | |
| 3 | TaigiSpeech: A Low-Resource Real-World Speech Intent Dataset and Preliminary Results with Scalable Data Mining In-the-Wild | Presents TaigiSpeech, a low-resource real-world Taiwanese Hokkien speech-intent dataset, and explores scalable in-the-wild data mining. | multimodal | |
| 4 | Probing How Scalable Table Data Enhances General Long-Context Reasoning | TableLong: uses scalable table data to strengthen long-context reasoning in large language models. | large language model | |
| 5 | Optimizing Multi-Agent Weather Captioning via Text Gradient Descent: A Training-Free Approach with Consensus-Aware Gradient Fusion | Proposes WeatherTGD, a training-free multi-agent weather-captioning optimization framework based on text gradient descent. | large language model | |
| 6 | Generalizable Self-Evolving Memory for Automatic Prompt Optimization | Proposes MemAPO, which uses a self-evolving memory for automatic prompt optimization of large language models, improving generalization. | large language model | |

🔬 Pillar 2: RL & Architecture (4 papers)

| # | Title | One-line takeaway | Tags | 🔗 |
|---|-------|-------------------|------|----|
| 7 | Dual-Space Knowledge Distillation with Key-Query Matching for Large Language Models with Vocabulary Mismatch | Proposes DSKD-CMA-GA, which addresses vocabulary mismatch in LLM distillation via generative adversarial learning. | distillation, large language model | |
| 8 | DRTriton: Large-Scale Synthetic Data Reinforcement Learning for Triton Kernel Generation | DRTriton: reinforcement learning on large-scale synthetic data for Triton kernel generation, significantly improving CUDA kernel efficiency. | reinforcement learning, large language model | |
| 9 | TAMTRL: Teacher-Aligned Reward Reshaping for Multi-Turn Reinforcement Learning in Long-Context Compression | Proposes TAMTRL, which uses teacher-aligned reward reshaping to tackle multi-turn reinforcement learning for long-context compression. | reinforcement learning, large language model | |
| 10 | Gumbel Distillation for Parallel Text Generation | Proposes Gumbel distillation to improve parallel text-generation quality, narrowing the gap with autoregressive models. | distillation | |
