| # | Title | Summary | Topic | Read |
|---|-------|---------|-------|------|
| 1 | Learning Multimodal Latent Space with EBM Prior and MCMC Inference | Proposes a multimodal latent-space learning method with an EBM prior and MCMC inference to improve cross-modal generation. | multimodal | |
| 2 | Towards Foundation Models for the Industrial Forecasting of Chemical Kinetics | Proposes an MLP-Mixer-based foundation-model approach for industrial chemical-kinetics forecasting. | foundation model | |
| 3 | AnyGraph: Graph Foundation Model in the Wild | AnyGraph: a graph foundation model for general-purpose graph learning, addressing generalization across heterogeneous graph data. | foundation model | |
| 4 | CoRA: Collaborative Information Perception by Large Language Model's Weights for Recommendation | Proposes CoRA to address the problem of integrating collaborative information into LLM-based recommendation. | large language model | |
| 5 | LLM-Barber: Block-Aware Rebuilder for Sparsity Mask in One-Shot for Large Language Models | LLM-Barber: a one-shot, block-aware sparsity-mask rebuilding method for large language models. | large language model | ✅ |
| 6 | Do Neural Scaling Laws Exist on Graph Self-Supervised Learning? | Reveals the absence of neural scaling laws in graph self-supervised learning: existing methods cannot support building graph foundation models. | foundation model | ✅ |
| 7 | A Little Confidence Goes a Long Way | Proposes a binary-classification method that probes LLM hidden-layer activations, matching the performance of large LLMs at low compute cost. | large language model | |
| 8 | DOMBA: Double Model Balancing for Access-Controlled Language Models via Minimum-Bounded Aggregation | Proposes DOMBA, a double-model balancing method via minimum-bounded aggregation for access-controlled language models. | large language model | |
| 9 | Tracing Privacy Leakage of Language Models to Training Data via Adjusted Influence Functions | Proposes heuristically adjusted influence functions (HAIF) to more accurately trace privacy leakage in language models to training data. | large language model | |