| 1 |
TS-HINT: Enhancing Semiconductor Time Series Regression Using Attention Hints From Large Language Model Reasoning |
Enhances semiconductor time series regression using attention hints derived from large language model reasoning. |
large language model, foundation model, chain-of-thought |
|
|
| 2 |
Taxonomy-Adaptive Moderation Model with Robust Guardrails for Large Language Models |
Proposes Roblox Guard 1.0, a taxonomy-adaptive moderation model that strengthens the input/output safety of LLM systems. |
large language model, chain-of-thought |
|
|
| 3 |
Scaling and Transferability of Annealing Strategies in Large Language Model Training |
Proposes a transferable learning-rate annealing optimization framework that improves the training efficiency of large language models. |
large language model |
|
|
| 4 |
Poodle: Seamlessly Scaling Down Large Language Models with Just-in-Time Model Replacement |
Poodle: seamlessly scales down large language models via just-in-time model replacement. |
large language model |
|
|
| 5 |
The Forgotten Shield: Safety Grafting in Parameter-Space for Medical MLLMs |
Proposes a parameter-space safety grafting method that improves the safety of medical multimodal large language models. |
large language model, multimodal |
|
|
| 6 |
Physics-Informed Neural Koopman Machine for Interpretable Longitudinal Personalized Alzheimer's Disease Forecasting |
Proposes the Neural Koopman Machine (NKM) for interpretable, longitudinal, personalized Alzheimer's disease forecasting. |
multimodal |
|
|
| 7 |
MaxShapley: Towards Incentive-compatible Generative Search with Fair Context Attribution |
Proposes the MaxShapley algorithm for incentive-compatible and fair content attribution in retrieval-augmented generative search. |
large language model |
|
|
| 8 |
Impugan: Learning Conditional Generative Models for Robust Data Imputation |
Impugan: a conditional generative adversarial network model for robust data imputation. |
multimodal |
|
|
| 9 |
KQ-SVD: Compressing the KV Cache with Provable Guarantees on Attention Fidelity |
KQ-SVD: compresses the KV cache via an optimized low-rank decomposition of the attention matrices, improving LLM inference efficiency. |
large language model |
|
|
| 10 |
Mitigating Catastrophic Forgetting in Mathematical Reasoning Finetuning through Mixed Training |
Proposes a mixed-training strategy that mitigates catastrophic forgetting during mathematical-reasoning finetuning. |
large language model |
|
|
| 11 |
Bootstrapping Fuzzers for Compilers of Low-Resource Language Dialects Using Language Models |
Germinator uses language models to automatically bootstrap fuzzers for compilers of low-resource language dialects, improving testing efficiency. |
large language model |
|
|
| 12 |
Feasibility of AI-Assisted Programming for End-User Development |
Explores the feasibility of AI-assisted programming for end-user development, as a replacement for or complement to low-code/no-code platforms. |
large language model |
|
|
| 13 |
RevoNAD: Reflective Evolutionary Exploration for Neural Architecture Design |
RevoNAD: a reflective evolutionary exploration method for neural architecture design that improves the reliability and deployability of architecture search. |
large language model |
|
|
| 14 |
When Forgetting Builds Reliability: LLM Unlearning for Reliable Hardware Code Generation |
Proposes an LLM unlearning framework for hardware code generation that improves code reliability. |
large language model |
|
|