cs.AI (2025-03-07)

📊 16 papers total | 🔗 3 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (13 🔗2) · Pillar 2: RL & Architecture (3 🔗1)

🔬 Pillar 9: Embodied Foundation Models (13 papers)

| # | Title | One-line summary | Tags | 🔗 |
| --- | --- | --- | --- | --- |
| 1 | The Society of HiveMind: Multi-Agent Optimization of Foundation Model Swarms to Unlock the Potential of Collective Intelligence | Proposes the HiveMind framework, using multi-agent optimization to unlock collective intelligence in swarms of foundation models | large language model, foundation model | |
| 2 | Ontology Generation using Large Language Models | Proposes two prompting methods for automatically generating high-quality OWL ontologies with large language models | large language model | |
| 3 | Evaluating Large Language Models in Code Generation: INFINITE Methodology for Defining the Inference Index | Proposes the INFINITE methodology for evaluating the inference performance of large language models in code generation | large language model | |
| 4 | Towards Understanding the Use of MLLM-Enabled Applications for Visual Interpretation by Blind and Low Vision People | Studies how MLLM-enabled visual-interpretation applications improve everyday experiences for blind and low-vision people | large language model, multimodal | |
| 5 | AVA: Attentive VLM Agent for Mastering StarCraft II | Proposes AVA, a vision-language-model StarCraft II agent with an attention mechanism | foundation model, multimodal | |
| 6 | Enhancing Reasoning with Collaboration and Memory | Proposes a collaboration- and memory-augmented LLM reasoning framework that improves performance on complex reasoning tasks | chain-of-thought | |
| 7 | TPU-Gen: LLM-Driven Custom Tensor Processing Unit Generator | TPU-Gen: an LLM-driven generator for custom tensor processing units that improves design efficiency and performance | large language model | |
| 8 | Cognitive Bias Detection Using Advanced Prompt Engineering | Proposes an advanced-prompt-engineering approach to detecting cognitive bias, improving the objectivity of user-generated content | large language model | |
| 9 | LLM-based Iterative Approach to Metamodeling in Automotive | Proposes an LLM-based iterative metamodeling method, applied in the automotive domain | large language model | |
| 10 | Accelerating Earth Science Discovery via Multi-Agent LLM Systems | Uses multi-agent LLM systems to accelerate Earth-science discovery | large language model | |
| 11 | Static Program Analysis Guided LLM Based Unit Test Generation | Proposes static-program-analysis-guided LLM unit-test generation, improving test code quality | large language model | |
| 12 | WritingBench: A Comprehensive Benchmark for Generative Writing | Proposes WritingBench, a comprehensive benchmark for generative writing, together with a query-dependent evaluation framework | large language model | |
| 13 | PromptPex: Automatic Test Generation for Language Model Prompts | PromptPex: an automatic test-generation tool for language-model prompts | large language model | |

🔬 Pillar 2: RL & Architecture (3 papers)

| # | Title | One-line summary | Tags | 🔗 |
| --- | --- | --- | --- | --- |
| 14 | R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning | R1-Searcher: incentivizes the search capability of LLMs via reinforcement learning, improving performance on knowledge-intensive tasks | reinforcement learning, distillation, large language model | |
| 15 | R1-Zero's "Aha Moment" in Visual Reasoning on a 2B Non-SFT Model | Reproduces R1-Zero's "aha moment" in visual reasoning on a 2B non-SFT model | reinforcement learning, large language model, multimodal | |
| 16 | Path Pooling: Training-Free Structure Enhancement for Efficient Knowledge Graph Retrieval-Augmented Generation | Proposes Path Pooling, a training-free structure-enhancement method for efficient knowledge-graph retrieval-augmented generation | representation learning, large language model | |
