cs.CL (2024-12-20)

📊 27 papers in total | 🔗 3 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (24, 🔗 3) · Pillar 2: RL & Architecture (3)

🔬 Pillar 9: Embodied Foundation Models (24 papers)

| # | Title | One-line summary | Tags |
|---|-------|------------------|------|
| 1 | A Breadth-First Catalog of Text Processing, Speech Processing and Multimodal Research in South Asian Languages | A broad survey of LLM-based text, speech, and multimodal research for low-resource South Asian languages. | large language model, multimodal |
| 2 | HREF: Human Response-Guided Evaluation of Instruction Following in Language Models | HREF: a human-response-guided method for evaluating instruction following in language models, addressing biases in existing evaluations. | large language model, instruction following |
| 3 | Critique of Impure Reason: Unveiling the reasoning behaviour of medical Large Language Models | Dissects the reasoning behaviour of medical LLMs to improve the transparency and trustworthiness of medical AI. | large language model |
| 4 | The Only Way is Ethics: A Guide to Ethical Research with Large Language Models | An LLM ethics whitepaper: a practical guide to ethical LLM research for NLP practitioners. | large language model |
| 5 | Humanlike Cognitive Patterns as Emergent Phenomena in Large Language Models | A survey: systematic analysis of humanlike cognitive patterns emerging in large language models. | large language model |
| 6 | TL-Training: A Task-Feature-Based Framework for Training Large Language Models in Tool Use | TL-Training: a task-feature-based framework for training LLMs in tool use. | large language model |
| 7 | Continual Learning Using Only Large Language Model Prompting | Proposes CLOB, a continual learning paradigm that uses only LLM prompting. | large language model |
| 8 | Logical Consistency of Large Language Models in Fact-checking | Proposes a logical-consistency benchmark to evaluate and improve LLMs' handling of complex logical queries in fact-checking. | large language model |
| 9 | Ensembling Large Language Models with Process Reward-Guided Tree Search for Better Complex Reasoning | Proposes LE-MCTS, which ensembles LLMs via process-reward-guided tree search to improve complex reasoning. | large language model |
| 10 | Error-driven Data-efficient Large Multimodal Model Tuning | Proposes an error-driven, data-efficient tuning framework to improve large multimodal models on downstream tasks. | multimodal |
| 11 | KRAIL: A Knowledge-Driven Framework for Base Human Reliability Analysis Integrating IDHEAS and Large Language Models | Proposes KRAIL, which integrates IDHEAS with LLMs to semi-automatically estimate base human error probabilities in human reliability analysis. | large language model |
| 12 | Mitigating Social Bias in Large Language Models: A Multi-Objective Approach within a Multi-Agent Framework | Proposes MOMA, a multi-agent framework that reduces social bias in LLMs while preserving performance. | large language model |
| 13 | Deliberative Alignment: Reasoning Enables Safer Language Models | Proposes deliberative alignment, using reasoning to make language models safer. | chain-of-thought |
| 14 | In-context Continual Learning Assisted by an External Continual Learner | Proposes InCA, which uses an external continual learner to assist in-context learning, enabling scalable continual learning without catastrophic forgetting. | large language model |
| 15 | Data-Centric Improvements for Enhancing Multi-Modal Understanding in Spoken Conversation Modeling | Proposes a data-centric multi-task learning approach that improves multimodal understanding in spoken conversation modeling. | multimodal |
| 16 | On the Suitability of pre-trained foundational LLMs for Analysis in German Legal Education | Evaluates the suitability of pre-trained LLMs for analysis in German legal education and proposes RAG-based improvements. | chain-of-thought |
| 17 | TelcoLM: collecting data, adapting, and benchmarking language models for the telecommunication domain | TelcoLM: building, adapting, and benchmarking domain-specific language models for telecommunications. | large language model |
| 18 | XRAG: eXamining the Core -- Benchmarking Foundational Components in Advanced Retrieval-Augmented Generation | XRAG: a benchmarking framework for the foundational components of advanced retrieval-augmented generation, used to diagnose and optimize RAG systems. | large language model |
| 19 | Fearful Falcons and Angry Llamas: Emotion Category Annotations of Arguments by Humans and LLMs | Uses crowdsourcing and LLMs to annotate emotion categories in arguments, enabling finer-grained emotion analysis. | chain-of-thought |
| 20 | Can Input Attributions Explain Inductive Reasoning in In-Context Learning? | Investigates whether input attribution methods can explain inductive reasoning in LLM in-context learning. | large language model |
| 21 | Don't Do RAG: When Cache-Augmented Generation is All You Need for Knowledge Tasks | Proposes cache-augmented generation (CAG) as a replacement for RAG on knowledge-intensive tasks, improving efficiency and reducing complexity. | large language model |
| 22 | Dynamic Label Name Refinement for Few-Shot Dialogue Intent Classification | Proposes dynamic label name refinement to resolve semantic confusion in few-shot dialogue intent classification. | large language model |
| 23 | Template-Driven LLM-Paraphrased Framework for Tabular Math Word Problem Generation | Proposes TeLL, a template-driven, LLM-paraphrased framework for generating high-quality tabular math word problems. | large language model |
| 24 | NeSyCoCo: A Neuro-Symbolic Concept Composer for Compositional Generalization | NeSyCoCo: a neuro-symbolic concept composer for compositional generalization. | large language model |

🔬 Pillar 2: RL & Architecture (3 papers)

| # | Title | One-line summary | Tags |
|---|-------|------------------|------|
| 25 | From General to Specific: Tailoring Large Language Models for Personalized Healthcare | Proposes PMLM, a personalized medical language model, addressing LLMs' lack of individualized service in healthcare. | reinforcement learning, large language model |
| 26 | Contrastive Learning for Task-Independent SpeechLLM-Pretraining | Proposes contrastive learning for task-independent SpeechLLM pretraining, improving performance on speech-processing tasks. | contrastive learning, large language model |
| 27 | BabyHGRN: Exploring RNNs for Sample-Efficient Training of Language Models | BabyHGRN: explores the sample efficiency of RNNs for training language models in low-resource settings. | Mamba, distillation |
