| # | Title | Summary | Tags | Read |
|---|-------|---------|------|------|
| 1 | ORAL: Prompting Your Large-Scale LoRAs via Conditional Recurrent Diffusion | ORAL: prompting large-scale LoRAs via conditional recurrent diffusion for controllable, scalable parameter generation. | large language model, foundation model, multimodal | |
| 2 | Predicting Targeted Therapy Resistance in Non-Small Cell Lung Cancer Using Multimodal Machine Learning | Proposes a multimodal machine-learning model to predict resistance to osimertinib in non-small cell lung cancer patients. | multimodal | |
| 3 | LLM4FS: Leveraging Large Language Models for Feature Selection | LLM4FS: a hybrid strategy that leverages large language models for feature selection. | large language model | ✅ |
| 4 | Rethinking Key-Value Cache Compression Techniques for Large Language Model Serving | Revisits key-value cache compression techniques for LLM serving to improve real-world deployment performance. | large language model | ✅ |
| 5 | Translating Multimodal AI into Real-World Inspection: TEMAI Evaluation Framework and Pathways for Implementation | Proposes the TEMAI framework to assess the translational capability of multimodal AI in industrial inspection and its pathways to implementation. | multimodal | |
| 6 | Communication-Efficient and Personalized Federated Foundation Model Fine-Tuning via Tri-Matrix Adaptation | Proposes CE-LoRA, achieving communication-efficient, personalized federated fine-tuning of foundation models via tri-matrix adaptation. | foundation model | |
| 7 | Timeseries Foundation Models for Mobility: A Benchmark Comparison with Traditional and Deep Learning Models | Benchmarks time-series foundation models on urban mobility forecasting against traditional and deep-learning methods. | foundation model | |
| 8 | Effectively Controlling Reasoning Models through Thinking Intervention | Proposes Thinking Intervention, a method for effectively steering the reasoning process of reasoning LLMs. | large language model, instruction following | |
| 9 | Inference-Time Scaling for Complex Tasks: Where We Stand and What Lies Ahead | Studies the impact of inference-time scaling on complex tasks, revealing its limitations and future potential. | large language model | |
| 10 | Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality | Proposes an approximate quasi-orthogonality approach to evaluating and designing sparse autoencoders, addressing the difficulty of selecting the hyperparameter k. | large language model | ✅ |
| 11 | Green MLOps to Green GenOps: An Empirical Study of Energy Consumption in Discriminative and Generative AI Operations | Measures the energy consumption of discriminative and generative AI models across MLOps pipelines, offering practical guidance for green GenOps. | large language model | |