cs.CL(2024-05-26)

📊 16 papers total | 🔗 5 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (10 🔗4) · Pillar 2: RL & Architecture (5 🔗1) · Pillar 1: Robot Control (1)

🔬 Pillar 9: Embodied Foundation Models (10 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 1 | Let Silence Speak: Enhancing Fake News Detection with Generated Comments from Large Language Models | GenFEND: leverages LLM-generated comments to improve fake news detection | large language model | |
| 2 | SED: Self-Evaluation Decoding Enhances Large Language Models for Better Generation | Proposes Self-Evaluation Decoding (SED) to improve LLM generation quality at uncertain tokens | large language model | |
| 3 | Assessing Empathy in Large Language Models with Real-World Physician-Patient Interactions | Evaluates the empathy of LLMs using real-world physician-patient interaction data | large language model | |
| 4 | Accurate and Nuanced Open-QA Evaluation Through Textual Entailment | Proposes a textual-entailment-based open-domain QA evaluation method for more accurate, fine-grained assessment | large language model, foundation model | |
| 5 | Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement Method for Diverse Hallucinations Categories | Proposes Adaptive Activation Steering (ACT), a tuning-free method that improves LLM truthfulness and mitigates hallucinations | large language model | |
| 6 | Large Scale Knowledge Washing | Proposes LAW, which erases knowledge from LLMs at scale by updating MLP layers while preserving reasoning ability | large language model | |
| 7 | A Preliminary Empirical Study on Prompt-based Unsupervised Keyphrase Extraction | Empirical study of prompt-based unsupervised keyphrase extraction, showing how prompt design affects performance | large language model | |
| 8 | Cocktail: A Comprehensive Information Retrieval Benchmark with LLM-Generated Documents Integration | Proposes Cocktail, a comprehensive information retrieval benchmark that integrates LLM-generated documents to evaluate retrieval models over mixed data sources | large language model | |
| 9 | Tool Learning in the Wild: Empowering Language Models as Automatic Tool Agents | AutoTools: empowers language models as automatic tool agents for tool learning | large language model | |
| 10 | CPsyCoun: A Report-based Multi-turn Dialogue Reconstruction and Evaluation Framework for Chinese Psychological Counseling | CPsyCoun: a report-based multi-turn dialogue reconstruction and evaluation framework for Chinese psychological counseling | large language model | |

🔬 Pillar 2: RL & Architecture (5 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 11 | Multi-Reference Preference Optimization for Large Language Models | Proposes Multi-Reference Preference Optimization (MRPO) to improve LLM alignment with human intent | reinforcement learning, preference learning, DPO | |
| 12 | Triple Preference Optimization: Achieving Better Alignment using a Single Step Optimization | Proposes Triple Preference Optimization (TPO), improving LLM reasoning and instruction following with a single optimization step | reinforcement learning, preference learning, RLHF | |
| 13 | M-RAG: Reinforcing Large Language Model Performance through Retrieval-Augmented Generation with Multiple Partitions | Proposes M-RAG, a multi-partition retrieval-augmented generation framework that boosts LLM performance across tasks | reinforcement learning, large language model | |
| 14 | RLSF: Fine-tuning LLMs via Symbolic Feedback | RLSF: fine-tunes LLMs via symbolic feedback to improve domain reasoning and logical alignment | reinforcement learning, large language model | |
| 15 | Automatically Generating Numerous Context-Driven SFT Data for LLMs across Diverse Granularity | Proposes AugCon, which automatically generates context-driven SFT data at multiple granularities to improve LLM fine-tuning | contrastive learning, large language model | |

🔬 Pillar 1: Robot Control (1 paper)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 16 | MentalManip: A Dataset For Fine-grained Analysis of Mental Manipulation in Conversations | Proposes the MentalManip dataset for fine-grained analysis of mental manipulation in conversations | manipulation | |
