| # | Title | Summary | Keywords | Selected |
|---|-------|---------|----------|----------|
| 1 | Adaptive Layer-Wise Transformations for Post-Training Quantization of Large Language Models | Proposes an adaptive layer-wise transformation framework for post-training quantization of large language models, significantly improving low-bit quantization performance. | large language model | |
| 2 | Layer-Wise High-Impact Parameter Ratio Optimization in Post-Training Quantization for Large Language Models | Proposes layer-wise optimization of the high-impact parameter ratio to address LLM quantization. | large language model | |
| 3 | PrismSSL: One Interface, Many Modalities; A Single-Interface Library for Multimodal Self-Supervised Learning | PrismSSL: a single-interface library for multimodal self-supervised learning. | multimodal | |
| 4 | Lane-Frame Quantum Multimodal Driving Forecasts for the Trajectory of Autonomous Vehicles | Proposes a quantum, lane-frame multimodal model for forecasting autonomous-vehicle trajectories, improving driving safety. | multimodal | |
| 5 | FIRM: Federated In-client Regularized Multi-objective Alignment for Large Language Models | Proposes FIRM, a federated in-client regularized multi-objective alignment method for large language models. | large language model | |
| 6 | ReBaPL: Repulsive Bayesian Prompt Learning | Proposes ReBaPL, which improves the generalization of large models on downstream tasks via repulsive Bayesian prompt learning. | foundation model, multimodal | |
| 7 | ToC: Tree-of-Claims Search with Multi-Agent Language Models | Proposes the ToC framework, which uses multi-agent language models for tree-search optimization of patent claims. | large language model, chain-of-thought | ✅ |
| 8 | End-to-End Transformer Acceleration Through Processing-in-Memory Architectures | Proposes an end-to-end Transformer acceleration scheme based on processing-in-memory architectures, addressing compute, memory-access, and complexity bottlenecks. | large language model | |
| 9 | Why Do Language Model Agents Whistleblow? | Studies the whistleblowing behavior of LLM agents in misconduct scenarios, revealing the influence of moral tendencies and task complexity. | large language model | |
| 10 | PersonalizedRouter: Personalized LLM Routing via Graph-based User Preference Modeling | Proposes PersonalizedRouter, which achieves personalized LLM routing through graph-based modeling of user preferences. | large language model | |