cs.AI (2025-07-21)

📊 32 papers in total | 🔗 3 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (22, 🔗2) · Pillar 2: RL & Architecture (9, 🔗1) · Pillar 5: Interaction & Reaction (1)

🔬 Pillar 9: Embodied Foundation Models (22 papers)

# | Title | One-line takeaway | Tags
1 | MEETI: A Multimodal ECG Dataset from MIMIC-IV-ECG with Signals, Images, Features and Interpretations | Proposes the MEETI dataset to address gaps in multimodal ECG analysis | large language model, multimodal
2 | Disentangling Homophily and Heterophily in Multimodal Graph Clustering | Proposes the DMGC framework, which disentangles homophily and heterophily in multimodal graph clustering for more effective clustering | multimodal
3 | Measuring and Analyzing Intelligence via Contextual Uncertainty in Large Language Models using Information-Theoretic Metrics | Proposes an information-theoretic analysis of contextual uncertainty for assessing the intelligence of large language models | large language model
4 | StackTrans: From Large Language Model to Large Pushdown Automata Model | Proposes StackTrans, which adds a learnable stack structure to strengthen LLMs' handling of context-free grammars | large language model
5 | SimdBench: Benchmarking Large Language Models for SIMD-Intrinsic Code Generation | SimdBench: a benchmark for evaluating large language models on SIMD-intrinsic code generation | large language model
6 | Expert-Guided LLM Reasoning for Battery Discovery: From AI-Driven Hypothesis to Synthesis and Characterization | ChatBattery: expert-knowledge-guided LLM reasoning for lithium-battery material discovery | large language model, chain-of-thought
7 | A Framework for Analyzing Abnormal Emergence in Service Ecosystems Through LLM-based Agent Intention Mining | Proposes the EAMI framework, which uses LLMs for dynamic, interpretable analysis of abnormal emergence in service ecosystems | large language model, chain-of-thought
8 | AutoMAT: A Hierarchical Framework for Autonomous Alloy Discovery | AutoMAT: a hierarchical framework for autonomous alloy discovery that accelerates the design of new alloys | large language model
9 | SynthCTI: LLM-Driven Synthetic CTI Generation to enhance MITRE Technique Mapping | Proposes SynthCTI to address the scarcity and imbalance of CTI data | large language model
10 | Solving Formal Math Problems by Decomposition and Iterative Reflection | Delta Prover: solves formal math problems with a general-purpose LLM via decomposition and iterative reflection | large language model
11 | AI-Powered Commit Explorer (APCE) | APCE: an AI-powered commit-message exploration tool that helps developers use and study LLM-generated commit messages | large language model
12 | From Logic to Language: A Trust Index for Problem Solving with LLMs | Proposes a trust-index framework for assessing the quality of LLM problem solving in natural language | large language model
13 | The Other Mind: How Language Models Exhibit Human Temporal Cognition | Reveals human-like temporal cognition in LLMs: models spontaneously form temporal reference points and follow the Weber-Fechner law | large language model
14 | Do AI models help produce verified bug fixes? | Studies how effectively AI models assist program repair, revealing LLMs' actual role in debugging and programmers' behavior patterns | large language model
15 | Left Leaning Models: How AI Evaluates Economic Policy? | Uses large language models to evaluate economic-policy preferences, revealing a "left-leaning" tendency in AI | large language model
16 | Multi-Stage Prompt Inference Attacks on Enterprise LLM Systems | Studies multi-stage prompt inference attacks on enterprise LLM systems and defenses against them | large language model
17 | Automated Visualization Makeovers with LLMs | Uses multimodal large language models to automatically improve and refine data visualizations | large language model
18 | HAMLET: Hyperadaptive Agent-based Modeling for Live Embodied Theatrics | HAMLET: a hyperadaptive agent-based modeling framework for live embodied theatrics | large language model
19 | PiMRef: Detecting and Explaining Ever-evolving Spear Phishing Emails with Knowledge Base Invariants | PiMRef: detects and explains ever-evolving spear-phishing emails using knowledge-base invariants | large language model
20 | Butterfly Effects in Toolchains: A Comprehensive Analysis of Failed Parameter Filling in LLM Tool-Agent Systems | Builds a taxonomy of parameter-filling failures in LLM tool-agent systems, analyzes how input sources correlate with failure modes, and proposes improvements | large language model
21 | IM-Chat: A Multi-agent LLM Framework Integrating Tool-Calling and Diffusion Modeling for Knowledge Transfer in Injection Molding Industry | IM-Chat: a multi-agent LLM framework integrating tool calling and diffusion modeling for knowledge transfer in the injection-molding industry | large language model
22 | SPAR: Scholar Paper Retrieval with LLM-based Agents for Enhanced Academic Search | SPAR: an LLM-agent framework for scholar paper retrieval that improves academic search | large language model

🔬 Pillar 2: RL & Architecture (9 papers)

# | Title | One-line takeaway | Tags
23 | Chart-R1: Chain-of-Thought Supervision and Reinforcement for Advanced Chart Reasoner | Proposes Chart-R1, which improves chart reasoning via chain-of-thought supervision and reinforcement learning | reinforcement learning, multimodal, chain-of-thought
24 | AoI-Aware Resource Allocation with Deep Reinforcement Learning for HAPS-V2X Networks | Proposes an AoI-aware resource-allocation method based on deep reinforcement learning for HAPS-V2X networks | reinforcement learning, deep reinforcement learning
25 | LLM world models are mental: Output layer evidence of brittle world model use in LLM mechanical reasoning | Uses cognitive-science methods to assess LLM world-model ability, revealing the limits of their mechanical reasoning | world model, large language model
26 | One Step is Enough: Multi-Agent Reinforcement Learning based on One-Step Policy Optimization for Order Dispatch on Ride-Sharing Platforms | Proposes multi-agent reinforcement learning based on one-step policy optimization for order dispatch on ride-sharing platforms | reinforcement learning, PPO
27 | Hierarchical Budget Policy Optimization for Adaptive Reasoning | Proposes Hierarchical Budget Policy Optimization (HBPO) to improve the efficiency and accuracy of adaptive reasoning in large models | reinforcement learning, chain-of-thought
28 | LAPO: Internalizing Reasoning Efficiency via Length-Adaptive Policy Optimization | Proposes LAPO, which improves reasoning efficiency and cuts token consumption via length-adaptive policy optimization | reinforcement learning, chain-of-thought
29 | Optimal Transceiver Design in Over-the-Air Federated Distillation | Proposes an over-the-air federated distillation framework with optimized transceiver design to speed up federated-learning convergence | distillation
30 | Data-Efficient Safe Policy Improvement Using Parametric Structure | Exploits parametric structure to improve the data efficiency of safe policy improvement in offline reinforcement learning | reinforcement learning, offline reinforcement learning
31 | RAD: Retrieval High-quality Demonstrations to Enhance Decision-making | RAD: enhances offline-RL decision-making by retrieving high-quality demonstrations | reinforcement learning, offline reinforcement learning

🔬 Pillar 5: Interaction & Reaction (1 paper)

# | Title | One-line takeaway | Tags
32 | Winning Gold at IMO 2025 with a Model-Agnostic Verification-and-Refinement Pipeline | Proposes a model-agnostic verification-and-refinement pipeline that achieves a breakthrough on IMO 2025 problems | IMoS, large language model
