| # | Title | Summary | Keywords | |
|---|-------|---------|----------|---|
| 1 | Multilingual-To-Multimodal (M2M): Unlocking New Languages with Monolingual Text | Proposes METAL, which uses monolingual text to unlock multilingual-to-multimodal zero-shot transfer. | multimodal zero-shot transfer | ✅ |
| 2 | ProbFM: Probabilistic Time Series Foundation Model with Uncertainty Decomposition | ProbFM: a probabilistic time-series foundation model with uncertainty decomposition, applied to financial forecasting. | foundation model | |
| 3 | LeMoF: Level-guided Multimodal Fusion for Heterogeneous Clinical Data | Proposes LeMoF, which improves prediction accuracy on heterogeneous clinical data through level-guided multimodal fusion. | multimodal | |
| 4 | PID-Guided Partial Alignment for Multimodal Decentralized Federated Learning | PARSE: a PID-guided partial-alignment framework for multimodal decentralized federated learning. | multimodal | |
| 5 | Unlabeled Data Can Provably Enhance In-Context Learning of Transformers | Proposes an enhanced in-context learning framework that leverages unlabeled data to improve Transformer performance. | large language model, chain-of-thought | |
| 6 | Single-Stage Huffman Encoder for ML Compression | Proposes a single-stage Huffman encoder that addresses the bandwidth bottleneck in LLM compression. | large language model | |
| 7 | PACEvolve: Enabling Long-Horizon Progress-Aware Consistent Evolution | PACEvolve: a framework for long-horizon, progress-aware, and consistent evolutionary search. | large language model | |
| 8 | LangLasso: Interactive Cluster Descriptions through LLM Explanation | Proposes LangLasso to address the accessibility of cluster explanations. | large language model | |
| 9 | Queueing-Aware Optimization of Reasoning Tokens for Accuracy-Latency Trade-offs in LLM Servers | Proposes a queueing-aware method for optimizing reasoning tokens in LLM servers, realizing an accuracy-latency trade-off. | large language model | |
| 10 | In-Context Source and Channel Coding | Proposes an in-context decoding framework that improves the robustness of LLM-driven arithmetic coding for text transmission at low SNR. | large language model | |
| 11 | LOOKAT: Lookup-Optimized Key-Attention for Memory-Efficient Transformers | Proposes LOOKAT, which achieves memory-efficient Transformer compression via a lookup-optimized key-attention mechanism. | large language model | |
| 12 | Understanding and Preserving Safety in Fine-Tuned LLMs | Proposes SPF, a safety-preserving fine-tuning method that resolves the conflict between safety and utility in LLM fine-tuning. | large language model | |
| 13 | FaTRQ: Tiered Residual Quantization for LLM Vector Search in Far-Memory-Aware ANNS Systems | Proposes FaTRQ to address storage and latency issues in ANNS systems. | multimodal | |
| 14 | An Exploratory Study to Repurpose LLMs to a Unified Architecture for Time Series Classification | An exploratory study on repurposing LLMs as a unified architecture for time-series classification. | large language model | |