cs.LG (2025-04-27)

📊 8 papers total | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 2: RL Algorithms & Architecture (6) · Pillar 1: Robot Control (1, 🔗 1) · Pillar 9: Embodied Foundation Models (1, 🔗 1)

🔬 Pillar 2: RL Algorithms & Architecture (6 papers)

| # | Title | One-line summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 1 | Contextual Online Uncertainty-Aware Preference Learning for Human Feedback | Proposes a contextual online uncertainty-aware preference learning framework for optimizing models from human feedback. | reinforcement learning, preference learning, RLHF | |
| 2 | Adaptive Helpfulness-Harmlessness Alignment with Preference Vectors | Proposes the Preference Vector framework to balance helpfulness and harmlessness in LLMs. | reinforcement learning, RLHF, DPO | |
| 3 | Supervised Pretraining for Material Property Prediction | Proposes a supervised pretraining method that leverages material category information to improve material property prediction. | representation learning, MAE, foundation model | |
| 4 | HyperController: A Hyperparameter Controller for Fast and Stable Training of Reinforcement Learning Neural Networks | Proposes HyperController, which accelerates hyperparameter optimization for reinforcement learning neural networks. | reinforcement learning | |
| 5 | Attention to Detail: Fine-Scale Feature Preservation-Oriented Geometric Pre-training for AI-Driven Surrogate Modeling | Proposes a geometric pretraining method oriented toward preserving fine-scale features, for AI-driven surrogate modeling. | representation learning, foundation model | |
| 6 | Swapped Logit Distillation via Bi-level Teacher Alignment | Proposes swapped logit distillation based on bi-level teacher alignment to improve knowledge transfer. | distillation | |

🔬 Pillar 1: Robot Control (1 paper)

| # | Title | One-line summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 7 | Fast and Robust: Task Sampling with Posterior and Diversity Synergies for Adaptive Decision-Makers in Randomized Environments | Proposes PDTS, which improves the robustness of adaptive decision-making in randomized environments through task sampling driven by posterior and diversity synergies. | domain randomization, reinforcement learning, predictive model | ✓ |

🔬 Pillar 9: Embodied Foundation Models (1 paper)

| # | Title | One-line summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 8 | Hierarchical Attention Generates Better Proofs | Proposes a hierarchical attention mechanism that improves the performance of large language models on formal theorem proving. | large language model | ✓ |
