cs.LG (2025-04-10)

📊 16 papers | 🔗 1 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (7, 🔗1) · Pillar 2: RL & Architecture (7) · Pillar 1: Robot Control (1) · Pillar 5: Interaction & Reaction (1)

🔬 Pillar 9: Embodied Foundation Models (7 papers)

# | Title | One-line Summary | Tags | 🔗
1 | GPT Carry-On: Training Foundation Model for Customization Could Be Simple, Scalable and Affordable | GPT Carry-On: proposes a simple, scalable, and affordable framework for customized LLM training | foundation model, chain-of-thought
2 | C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing | Proposes C3PO to optimize expert pathways in MoE models at test time | large language model
3 | Robust Hallucination Detection in LLMs via Adaptive Token Selection | Proposes HaMI, achieving more robust hallucination detection in LLMs via adaptive token selection | large language model
4 | Using LLMs for Analyzing AIS Data | Explores the use of LLMs for AIS data analysis, proposing four approaches and analyzing their trade-offs | large language model
5 | Apt-Serve: Adaptive Request Scheduling on Hybrid Cache for Scalable LLM Inference Serving | Apt-Serve: proposes hybrid caching and adaptive request scheduling to improve effective throughput in LLM inference serving | large language model
6 | LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation | Proposes LoRI, reducing cross-task interference in multi-task low-rank adaptation via random projection and sparsification | large language model
7 | Conditional Data Synthesis Augmentation | Proposes Conditional Data Synthesis Augmentation (CoDSA), using generative models to improve model performance across data modalities | multimodal

🔬 Pillar 2: RL & Architecture (7 papers)

# | Title | One-line Summary | Tags | 🔗
8 | Deep Reinforcement Learning for Day-to-day Dynamic Tolling in Tradable Credit Schemes | Proposes a deep-RL-based dynamic tolling method for tradable credit schemes to mitigate traffic congestion | reinforcement learning, deep reinforcement learning
9 | VL-Rethinker: Incentivizing Self-Reflection of Vision-Language Models with Reinforcement Learning | VL-Rethinker: uses reinforcement learning to incentivize self-reflection in vision-language models, improving complex reasoning | reinforcement learning, distillation, multimodal
10 | Rethinking the Foundations for Continual Reinforcement Learning | Re-examines the theoretical foundations of continual reinforcement learning, proposing a new history-based formalism | reinforcement learning
11 | A Relative Ignorability Framework for Decision-Relevant Observability in Control Theory and Reinforcement Learning | Proposes a relative-ignorability framework to address decision-relevant observability in control theory and reinforcement learning | reinforcement learning
12 | ms-Mamba: Multi-scale Mamba for Time-Series Forecasting | Proposes ms-Mamba, a multi-scale Mamba architecture for time-series forecasting | Mamba
13 | Distilling Knowledge from Heterogeneous Architectures for Semantic Segmentation | Proposes HeteroAKD for knowledge distillation across heterogeneous architectures in semantic segmentation, improving student model performance | teacher-student, distillation
14 | Echo Chamber: RL Post-training Amplifies Behaviors Learned in Pretraining | Shows that RL post-training amplifies behaviors learned in pretraining, revealing training biases and generalization properties of math-reasoning models | reinforcement learning, PPO

🔬 Pillar 1: Robot Control (1 paper)

# | Title | One-line Summary | Tags | 🔗
15 | Fast Adaptation with Behavioral Foundation Models | Proposes a fast-adaptation strategy built on behavioral foundation models, improving zero-shot reinforcement learning performance | locomotion, reinforcement learning, foundation model

🔬 Pillar 5: Interaction & Reaction (1 paper)

# | Title | One-line Summary | Tags | 🔗
16 | Privacy-Preserving Vertical K-Means Clustering | Proposes a vertical K-Means clustering method based on homomorphic encryption and differential privacy, reducing communication complexity while preserving privacy | OMOMO
