cs.CL (2024-08-21)

📊 19 papers in total | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (15, 🔗 1) · Pillar 2: RL & Architecture (3, 🔗 1) · Pillar 1: Robot Control (1)

🔬 Pillar 9: Embodied Foundation Models (15 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 1 | Cause-Aware Empathetic Response Generation via Chain-of-Thought Fine-Tuning | Proposes cause-aware empathetic response generation via chain-of-thought fine-tuning to improve LLMs' emotional understanding. | large language model, chain-of-thought | |
| 2 | SarcasmBench: Towards Evaluating Large Language Models on Sarcasm Understanding | Introduces SarcasmBench to address large language models' shortcomings in sarcasm understanding. | large language model, chain-of-thought | |
| 3 | Large Language Models for Page Stream Segmentation | Applies large language models to page stream segmentation and introduces the enhanced benchmark TABME++. | large language model, multimodal | |
| 4 | First Activations Matter: Training-Free Methods for Dynamic Activation in Large Language Models | Proposes training-free, threshold-based dynamic activation methods to improve LLM inference efficiency. | large language model | |
| 5 | Large Language Models are Good Attackers: Efficient and Stealthy Textual Backdoor Attacks | Proposes EST-Bad, which uses large language models for efficient and stealthy textual backdoor attacks. | large language model | |
| 6 | MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing | Proposes MoE-LPR, which extends LLMs' multilingual ability via mixture-of-experts with language-priors routing while mitigating forgetting. | large language model | |
| 7 | Memorization in In-Context Learning | Reveals memorization in in-context learning and examines its correlation with downstream task performance. | large language model | |
| 8 | GeoReasoner: Reasoning On Geospatially Grounded Context For Natural Language Understanding | Proposes GeoReasoner, which strengthens natural language understanding through reasoning on geospatially grounded context. | large language model | |
| 9 | Understanding Epistemic Language with a Language-augmented Bayesian Theory of Mind | Proposes LaBToM, a language-augmented Bayesian theory of mind for understanding epistemic language. | multimodal | |
| 10 | Xinyu: An Efficient LLM-based System for Commentary Generation | Xinyu: an efficient LLM-based system for Chinese commentary generation that improves commentators' efficiency while maintaining quality. | large language model | |
| 11 | RAG-Optimized Tibetan Tourism LLMs: Enhancing Accuracy and Personalization | Proposes RAG-optimized LLMs for Tibetan tourism, improving accuracy and personalized recommendations. | large language model | |
| 12 | WeQA: A Benchmark for Retrieval Augmented Generation in Wind Energy Domain | Introduces WeQA, a retrieval-augmented generation benchmark for the wind energy domain to accelerate decision support. | large language model | |
| 13 | Against All Odds: Overcoming Typology, Script, and Language Confusion in Multilingual Embedding Inversion Attacks | Exposes vulnerabilities to multilingual embedding inversion attacks and the effects of script, typology, and language confusion. | large language model | |
| 14 | RAGLAB: A Modular and Research-Oriented Unified Framework for Retrieval-Augmented Generation | RAGLAB: a modular, research-oriented unified framework for retrieval-augmented generation. | large language model | |
| 15 | RedWhale: An Adapted Korean LLM Through Efficient Continual Pretraining | RedWhale: a Korean LLM adapted through efficient continual pretraining. | large language model | |

🔬 Pillar 2: RL & Architecture (3 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 16 | Practical token pruning for foundation models in few-shot conversational virtual assistant systems | Proposes practical token pruning to speed up foundation-model inference in few-shot conversational virtual assistant systems. | contrastive learning, distillation, foundation model | |
| 17 | Personality Alignment of Large Language Models | Proposes personality alignment so that large language models generate content tailored to individual users' preferences. | DPO, large language model | |
| 18 | LLM Pruning and Distillation in Practice: The Minitron Approach | Minitron: compresses Llama 3.1 and Mistral NeMo into smaller models via pruning and distillation. | distillation | |

🔬 Pillar 1: Robot Control (1 paper)

| # | Title | One-line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 19 | Political Bias in LLMs: Unaligned Moral Values in Agent-centric Simulations | Evaluates LLMs' political leanings via agent-centric simulations, revealing misalignment between their moral values and human views. | manipulation | |
