cs.LG (2025-03-19)

📊 26 papers total | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 2: RL Algorithms & Architecture (12 🔗1) · Pillar 9: Embodied Foundation Models (12) · Pillar 1: Robot Control (2 🔗1)

🔬 Pillar 2: RL Algorithms & Architecture (12 papers)

| # | Title | One-Line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 1 | Continual Multimodal Contrastive Learning | Proposes an optimization-based continual multimodal contrastive learning method to address catastrophic forgetting in incremental learning over modality data. | contrastive learning, multimodal | |
| 2 | Towards Achieving Perfect Multimodal Alignment | Proposes a perfect multimodal alignment method that improves cross-modal representation learning and transfer performance. | representation learning, multimodal | |
| 3 | Learning Topology Actions for Power Grid Control: A Graph-Based Soft-Label Imitation Learning Approach | Proposes a graph-neural-network and soft-label imitation learning approach for power grid topology control. | reinforcement learning, deep reinforcement learning, DRL | |
| 4 | Application of linear regression and quasi-Newton methods to the deep reinforcement learning in continuous action cases | Proposes DLS-DDPG, combining linear regression and quasi-Newton methods to improve deep reinforcement learning in continuous action spaces. | reinforcement learning, deep reinforcement learning | |
| 5 | VIPER: Visual Perception and Explainable Reasoning for Sequential Decision-Making | VIPER: a visual perception and explainable reasoning framework for sequential decision-making. | reinforcement learning, large language model, multimodal | |
| 6 | Continual Contrastive Learning on Tabular Data with Out of Distribution | Proposes TCCL, a continual contrastive learning framework for tabular data that improves OOD generalization. | representation learning, contrastive learning | |
| 7 | Good Actions Succeed, Bad Actions Generalize: A Case Study on Why RL Generalizes Better | Compares the generalization of supervised learning and reinforcement learning in visual navigation, revealing the mechanism behind RL's superior generalization. | reinforcement learning, PPO, behavior cloning | |
| 8 | Robustness of Nonlinear Representation Learning | Studies the robustness of nonlinear representation learning and proposes an identifiability analysis under approximately isometric mixing. | representation learning | |
| 9 | Partially Observable Reinforcement Learning with Memory Traces | Proposes a memory-trace-based reinforcement learning method for long-horizon dependencies in partially observable environments. | reinforcement learning | |
| 10 | LogLLaMA: Transformer-based log anomaly detection with LLaMA | LogLLaMA: LLaMA-based log anomaly detection that significantly outperforms existing methods. | reinforcement learning, large language model | |
| 11 | What Makes a Reward Model a Good Teacher? An Optimization Perspective | Identifies the key to reward-model effectiveness: the importance of variance from an optimization perspective. | reinforcement learning, RLHF | |
| 12 | Multi-Agent Actor-Critic with Harmonic Annealing Pruning for Dynamic Spectrum Access Systems | Proposes a multi-agent actor-critic algorithm with harmonic annealing pruning for dynamic spectrum access systems. | reinforcement learning, deep reinforcement learning | |

🔬 Pillar 9: Embodied Foundation Models (12 papers)

| # | Title | One-Line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 13 | Intelligent Orchestration of Distributed Large Foundation Model Inference at the Edge | Proposes an adaptive split-inference orchestration framework for dynamic resource allocation of large-model inference in edge environments. | foundation model | |
| 14 | A Vector-Quantized Foundation Model for Patient Behavior Monitoring | Proposes a vector-quantized foundation model for behavior monitoring, used for patient behavior analysis and risk assessment. | foundation model | |
| 15 | Robust Transmission of Punctured Text with Large Language Model-based Recovery | Proposes an LLM-based text transmission model that achieves robust communication at low SNR via important-character selection. | large language model | |
| 16 | Pseudo Relevance Feedback is Enough to Close the Gap Between Small and Large Dense Retrieval Models | PromptPRF: uses pseudo-relevance feedback to let small dense retrieval models rival large ones. | large language model, chain-of-thought | |
| 17 | SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks | SWEET-RL: trains multi-turn LLM agents on collaborative reasoning tasks, outperforming GPT-4o. | large language model | |
| 18 | FedBEns: One-Shot Federated Learning based on Bayesian Ensemble | FedBEns: a one-shot federated learning algorithm based on Bayesian ensembles. | multimodal | |
| 19 | Understanding the Generalization of In-Context Learning in Transformers: An Empirical Study | Empirically studies the generalization of in-context learning in Transformers, revealing performance differences across task generalization dimensions. | large language model | |
| 20 | Task-Specific Data Selection for Instruction Tuning via Monosemantic Neuronal Activations | Proposes task-specific data selection for instruction tuning based on monosemantic neuronal activations. | large language model | |
| 21 | LLM-Aided Customizable Profiling of Code Data Based On Programming Language Concepts | Proposes LLM-aided profiling of code data to improve data quality for code LLMs. | large language model | |
| 22 | Prada: Black-Box LLM Adaptation with Private Data on Resource-Constrained Devices | Prada: a scheme for black-box LLM adaptation with private data on resource-constrained devices. | large language model | |
| 23 | Enforcing Consistency and Fairness in Multi-level Hierarchical Classification with a Mask-based Output Layer | Proposes a mask-based output layer for multi-level hierarchical classification, improving consistency and fairness. | large language model | |
| 24 | GReaTER: Generate Realistic Tabular data after data Enhancement and Reduction | GReaTER: generates more realistic tabular data through data enhancement and reduction. | large language model | |

🔬 Pillar 1: Robot Control (2 papers)

| # | Title | One-Line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 25 | Diffusion-Based Forecasting for Uncertainty-Aware Model Predictive Control | Proposes a diffusion-model-based predictive control framework for uncertainty-aware decision-making. | MPC, model predictive control, reinforcement learning | |
| 26 | 1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities | Scaling network depth to 1000 layers improves self-supervised reinforcement learning on goal-reaching tasks. | locomotion, manipulation, reinforcement learning | |

⬅️ Back to cs.LG index · 🏠 Back to home