cs.LG (2024-09-30)

📊 21 papers in total | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 2: RL & Architecture (9, 🔗 2) · Pillar 9: Embodied Foundation Models (7) · Pillar 1: Robot Control (3) · Pillar 4: Generative Motion (1) · Pillar 5: Interaction & Reaction (1)

🔬 Pillar 2: RL & Architecture (9 papers)

| # | Title | One-line Takeaway | Tags | 🔗 |
|---|-------|-------------------|------|----|
| 1 | The Perfect Blend: Redefining RLHF with Mixture of Judges | Proposes Constrained Generative Policy Optimization (CGPO) built on a mixture of judges, improving RLHF performance in multi-task learning. | reinforcement learning, PPO, RLHF | |
| 2 | RouterDC: Query-Based Router by Dual Contrastive Learning for Assembling Large Language Models | RouterDC: a query-based router trained with dual contrastive learning for assembling large language models. | contrastive learning, large language model | |
| 3 | Fisher Information-based Efficient Curriculum Federated Learning with Large Language Models | Proposes the FibecFed framework, which uses Fisher information for efficient curriculum-based federated fine-tuning of large language models. | curriculum learning, large language model | |
| 4 | A SSM is Polymerized from Multivariate Time Series | Proposes Poly-Mamba to model the complex dependencies in multivariate time series. | Mamba, SSM, state space model | |
| 5 | Upper and Lower Bounds for Distributionally Robust Off-Dynamics Reinforcement Learning | Proposes the We-DRIVE-U algorithm to handle dynamics uncertainty in off-dynamics reinforcement learning. | reinforcement learning | |
| 6 | Collaborative Knowledge Distillation via a Learning-by-Education Node Community | Proposes the LENC framework for collaborative knowledge distillation. | distillation | |
| 7 | Whole-Graph Representation Learning For the Classification of Signed Networks | Proposes two whole-graph representation learning methods, SG2V and WSGCN, for classifying signed networks. | representation learning | |
| 8 | HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning | Proposes HYDRA-FL, which uses hybrid knowledge distillation to improve the robustness and accuracy of federated learning under heterogeneous data and attacks. | distillation | |
| 9 | TSI: A Multi-View Representation Learning Approach for Time Series Forecasting | TSI: a multi-view representation learning approach for time-series forecasting. | representation learning | |
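Two entries above (rows 6 and 8) build on knowledge distillation. As a generic refresher, and not the method of any listed paper, the classic distillation loss matches a student's temperature-softened predictions to a teacher's via KL divergence. A minimal NumPy sketch:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T gives softer distributions.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # (the standard scaling so gradients stay comparable across T).
    p = softmax(teacher_logits, T)  # soft teacher targets
    q = softmax(student_logits, T)  # student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T ** 2) * float(np.mean(kl))
```

The loss is zero when student and teacher logits agree and positive otherwise; frameworks like HYDRA-FL or LENC add their own components on top of a term of this shape.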

🔬 Pillar 9: Embodied Foundation Models (7 papers)

| # | Title | One-line Takeaway | Tags | 🔗 |
|---|-------|-------------------|------|----|
| 10 | Comprehensive Performance Modeling and System Design Insights for Foundation Models | Comprehensive performance modeling and system-design insights across different types of Transformers. | large language model, foundation model | |
| 11 | Learning Multimodal Latent Generative Models with Energy-Based Prior | Proposes a multimodal latent generative model with an energy-based prior, improving cross-modal information capture and generation consistency. | multimodal | |
| 12 | Characterizing and Efficiently Accelerating Multimodal Generation Model Inference | Characterizes the performance bottlenecks of multimodal generation model inference and proposes acceleration optimizations. | multimodal | |
| 13 | Optimizing Cross-Client Domain Coverage for Federated Instruction Tuning of Large Language Models | Proposes FedDCA, which improves federated instruction tuning of large language models by optimizing cross-client domain coverage. | large language model | |
| 14 | Supervised Multi-Modal Fission Learning | Proposes a supervised multi-modal fission learning (MMFL) model for identifying predictive latent components in multimodal data. | multimodal | |
| 15 | Rotated Runtime Smooth: Training-Free Activation Smoother for accurate INT4 inference | Proposes Rotated Runtime Smooth, a training-free activation smoother that improves the accuracy of INT4 quantized inference in large models. | large language model | |
| 16 | Robust LLM safeguarding via refusal feature adversarial training | Proposes ReFAT: refusal feature adversarial training to improve LLM safety. | large language model | |
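For context on row 15: plain per-tensor symmetric INT4 quantization maps floats to the signed range [-8, 7], and activation outliers blow up the scale and hence the rounding error, which is the failure mode smoothing methods like Rotated Runtime Smooth target. A minimal sketch of the naive baseline round-trip (not the paper's method):

```python
import numpy as np

def quantize_int4(x):
    # Per-tensor symmetric quantization into the signed INT4 range [-8, 7].
    # One outlier inflates `scale` and coarsens everything else.
    amax = float(np.abs(x).max())
    scale = amax / 7.0 if amax > 0 else 1.0
    q = np.clip(np.round(np.asarray(x, dtype=float) / scale), -8, 7)
    return q.astype(np.int8), scale  # 4-bit codes stored in int8

def dequantize_int4(q, scale):
    # Reconstruct floats; round-trip error is at most scale / 2 per element.
    return q.astype(float) * scale
```

Usage: `q, s = quantize_int4(activations)` then `dequantize_int4(q, s)`; the smaller the scale (i.e., the fewer outliers), the tighter the reconstruction.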

🔬 Pillar 1: Robot Control (3 papers)

| # | Title | One-line Takeaway | Tags | 🔗 |
|---|-------|-------------------|------|----|
| 17 | M2Distill: Multi-Modal Distillation for Lifelong Imitation Learning | M2Distill: a multi-modal distillation method for lifelong imitation learning that addresses catastrophic forgetting. | manipulation, imitation learning, distillation | |
| 18 | COLLAGE: Collaborative Human-Agent Interaction Generation using Hierarchical Latent Diffusion and Language Models | COLLAGE: generates collaborative human-object-human interactions using hierarchical latent diffusion and language models. | motion planning, motion generation, VQ-VAE | |
| 19 | Breaking the Curse of Multiagency in Robust Multi-Agent Reinforcement Learning | Proposes robust multi-agent reinforcement learning algorithms grounded in behavioral economics to break the curse of multiagency. | sim-to-real, reinforcement learning | |

🔬 Pillar 4: Generative Motion (1 paper)

| # | Title | One-line Takeaway | Tags | 🔗 |
|---|-------|-------------------|------|----|
| 20 | Probabilistic Classification of Near-Surface Shallow-Water Sediments using A Portable Free-Fall Penetrometer | Proposes a machine-learning-based probabilistic classification method for near-surface shallow-water sediments using portable free-fall penetrometer data. | penetration | |

🔬 Pillar 5: Interaction & Reaction (1 paper)

| # | Title | One-line Takeaway | Tags | 🔗 |
|---|-------|-------------------|------|----|
| 21 | Comments on "Privacy-Enhanced Federated Learning Against Poisoning Adversaries" | Reveals privacy leakage in the PEFL federated learning framework and shows that the proposed fixes fall short. | OMOMO | |
