cs.LG (2024-07-29)

📊 11 papers in total | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 2: RL Algorithms & Architecture (7, 🔗 2) · Pillar 9: Embodied Foundation Models (3) · Pillar 8: Physics-based Animation (1)

🔬 Pillar 2: RL Algorithms & Architecture (7 papers)

| # | Title | One-line takeaway | Tags | 🔗 |
|---|---|---|---|---|
| 1 | Dataset Distillation for Offline Reinforcement Learning | Proposes a dataset-distillation method for offline reinforcement learning to improve policy training. | reinforcement learning · offline reinforcement learning · distillation | |
| 2 | Diffusion-DICE: In-Sample Diffusion Guidance for Offline Reinforcement Learning | Proposes Diffusion-DICE, which uses diffusion models and the DICE method to address policy optimization in offline RL. | reinforcement learning · offline RL · offline reinforcement learning | |
| 3 | Boosting Graph Foundation Model from Structural Perspective | BooG: boosts graph foundation models from a structural perspective, addressing cross-domain differences in graph structure. | contrastive learning · foundation model | |
| 4 | A Method for Fast Autonomy Transfer in Reinforcement Learning | Proposes a multi-critic actor-critic algorithm to accelerate autonomy transfer in reinforcement learning. | reinforcement learning | |
| 5 | Quantum Computing and Neuromorphic Computing for Safe, Reliable, and explainable Multi-Agent Reinforcement Learning: Optimal Control in Autonomous Robotics | Uses quantum and neuromorphic computing to improve the safety and explainability of multi-agent RL for autonomous robots. | reinforcement learning | |
| 6 | Noise-Resilient Unsupervised Graph Representation Learning via Multi-Hop Feature Quality Estimation | Proposes a noise-resilient unsupervised graph representation learning method based on multi-hop feature quality estimation. | representation learning | |
| 7 | SAPG: Split and Aggregate Policy Gradients | Proposes SAPG, which splits and aggregates policy gradients to effectively exploit massively parallel environments. | reinforcement learning · PPO | |

🔬 Pillar 9: Embodied Foundation Models (3 papers)

| # | Title | One-line takeaway | Tags | 🔗 |
|---|---|---|---|---|
| 8 | CoMMIT: Coordinated Multimodal Instruction Tuning | CoMMIT: improves MLLM performance via coordinated multimodal instruction tuning. | large language model · multimodal | |
| 9 | Importance Corrected Neural JKO Sampling | Proposes an importance-corrected neural JKO sampling method for sampling from unnormalized probability density functions. | multimodal | |
| 10 | Detecting and Understanding Vulnerabilities in Language Models via Mechanistic Interpretability | Proposes a mechanistic-interpretability-based method for detecting LLM vulnerabilities, improving model safety. | large language model | |

🔬 Pillar 8: Physics-based Animation (1 paper)

| # | Title | One-line takeaway | Tags | 🔗 |
|---|---|---|---|---|
| 11 | Orca: Ocean Significant Wave Height Estimation with Spatio-temporally Aware Large Language Models | Orca: estimates ocean significant wave height using spatio-temporally aware large language models. | spatiotemporal · large language model | |
