| # | Title | Summary | Topic | ✅ |
|---|-------|---------|-------|----|
| 1 | DyQ-VLA: Temporal-Dynamic-Aware Quantization for Embodied Vision-Language-Action Models | DyQ-VLA: a temporal-dynamic-aware quantization method for embodied vision-language-action models. | vision-language-action (VLA) | |
| 2 | Distributional Regression with Tabular Foundation Models: Evaluating Probabilistic Predictions via Proper Scoring Rules | Evaluates the probabilistic predictions of tabular PFNs with proper scoring rules, improving distributional regression. | foundation model | |
| 3 | Deterministic Differentiable Structured Pruning for Large Language Models | Proposes deterministic differentiable structured pruning (DDP) for efficient compression of large language models. | large language model | |
| 4 | Impermanent: A Live Benchmark for Temporal Generalization in Time Series Forecasting | Proposes the Impermanent live benchmark to address the evaluation of temporal generalization in time series forecasting. | foundation model | ✅ |
| 5 | Efficient Credal Prediction through Decalibration | Proposes an efficient credal prediction method based on decalibration, suited to uncertainty quantification for complex models. | foundation model | |
| 6 | LycheeCluster: Efficient Long-Context Inference with Structure-Aware Chunking and Hierarchical KV Indexing | LycheeCluster: efficient long-context inference via structure-aware chunking and hierarchical KV indexing. | large language model | |
| 7 | Fibration Policy Optimization | Proposes Fibration Policy Optimization for multi-scale hierarchical policy optimization of large language models. | large language model | |
| 8 | SERQ: Saliency-Aware Low-Rank Error Reconstruction for LLM Quantization | SERQ: a saliency-aware low-rank error reconstruction method for LLM quantization. | large language model | |
| 9 | AutoAdapt: An Automated Domain Adaptation Framework for LLMs | AutoAdapt: an automated domain adaptation framework for LLMs that improves performance in specialized domains. | large language model | |
| 10 | Invisible Safety Threat: Malicious Finetuning for LLM via Steganography | Proposes a steganography-based malicious finetuning method that makes an LLM covertly generate harmful content while appearing safe on the surface. | large language model | |
| 11 | EAGLE-Pangu: Accelerator-Safe Tree Speculative Decoding on Ascend NPUs | EAGLE-Pangu: accelerator-safe tree speculative decoding on Ascend NPUs. | large language model | |
| 12 | Tiny Autoregressive Recursive Models | Explores recursion in autoregressive models, evaluating the effectiveness of tiny recursive models on autoregressive tasks. | foundation model | |
| 13 | Stabilized Fine-Tuning with LoRA in Federated Learning: Mitigating the Side Effect of Client Size and Rank via the Scaling Factor | Proposes SFed-LoRA, which mitigates the instability of LoRA fine-tuning in federated learning via an adaptive scaling factor. | large language model | |
| 14 | Capacity-Aware Mixture Law Enables Efficient LLM Data Optimization | Proposes CAMEL, a capacity-aware mixture law that efficiently optimizes LLM data mixing ratios and improves performance. | large language model | |
| 15 | FedMomentum: Preserving LoRA Training Momentum in Federated Fine-Tuning | FedMomentum: a framework that preserves LoRA training momentum in federated fine-tuning. | large language model | |
| 16 | ELLMob: Event-Driven Human Mobility Generation with Self-Aligned LLM Framework | ELLMob: event-driven human mobility generation with a self-aligned LLM framework. | large language model | ✅ |
| 17 | LeJOT-AutoML: LLM-Driven Feature Engineering for Job Execution Time Prediction in Databricks Cost Optimization | LeJOT-AutoML: LLM-driven feature engineering to improve job execution time prediction for Databricks cost optimization. | large language model | |
| 18 | Reject, Resample, Repeat: Understanding Parallel Reasoning in Language Model Inference | Proposes a particle-filtering framework for parallel reasoning in language model inference, optimizing sampling efficiency and analyzing its theoretical limits. | large language model | |