| # | Title | Summary | Keywords | Selected |
|---|-------|---------|----------|----------|
| 1 | PixelBytes: Catching Unified Embedding for Multimodal Generation | Proposes the PixelBytes embedding for unified multimodal representation learning and sequence generation. | Mamba, SSM, state space model | |
| 2 | PMT-MAE: Dual-Branch Self-Supervised Learning with Distillation for Efficient Point Cloud Classification | PMT-MAE: dual-branch self-supervised learning with distillation for efficient point cloud classification. | masked autoencoder, MAE, distillation | |
| 3 | Shuffle Mamba: State Space Models with Random Shuffle for Multi-Modal Image Fusion | Proposes Shuffle Mamba to address bias in multi-modal image fusion. | Mamba, state space model | |
| 4 | LinFusion: 1 GPU, 1 Minute, 16K Image | LinFusion: uses a linear attention mechanism to generate 16K images on a single GPU in one minute. | Mamba, linear attention, spatial relationship | ✅ |
| 5 | Dual Advancement of Representation Learning and Clustering for Sparse and Noisy Images | DARLC: jointly improves representation learning and clustering performance on sparse, noisy images. | representation learning, contrastive learning | ✅ |
| 6 | Efficient Point Cloud Classification via Offline Distillation Framework and Negative-Weight Self-Distillation Technique | Proposes an offline distillation framework with negative-weight self-distillation to improve point cloud classification efficiency while reducing model complexity. | distillation | |
| 7 | Latent Distillation for Continual Object Detection at the Edge | Proposes a latent-space distillation method for continual object detection on edge devices. | distillation | |
| 8 | AstroMAE: Redshift Prediction Using a Masked Autoencoder with a Novel Fine-Tuning Architecture | AstroMAE: a redshift prediction method based on a masked autoencoder with a novel fine-tuning architecture. | masked autoencoder | |
| 9 | Adaptive Explicit Knowledge Transfer for Knowledge Distillation | Proposes Adaptive Explicit Knowledge Transfer (AEKT) to improve logit-based distillation. | distillation | |
| 10 | Improving Apple Object Detection with Occlusion-Enhanced Distillation | Proposes occlusion-enhanced distillation to improve the robustness of apple object detection under natural occlusion. | distillation | |