| # | Title | Summary | Topic |
|---|-------|---------|-------|
| 1 | The Geometry of Persona: Disentangling Personality from Reasoning in Large Language Models | Proposes the Soul Engine framework, which decouples persona from reasoning ability to enable safe, controllable LLM personalization | large language model |
| 2 | FOAM: Blocked State Folding for Memory-Efficient LLM Training | FOAM: a blocked state-folding optimizer for memory-efficient LLM training | large language model |
| 3 | Balanced Accuracy: The Right Metric for Evaluating LLM Judges -- Explained through Youden's J statistic | Proposes Balanced Accuracy for evaluating LLM judges, addressing the sensitivity of conventional metrics to class imbalance | large language model |
| 4 | LUNE: Efficient LLM Unlearning via LoRA Fine-Tuning with Negative Examples | LUNE: an efficient LLM unlearning framework based on LoRA fine-tuning with negative examples | large language model |
| 5 | Recover-to-Forget: Gradient Reconstruction from LoRA for Efficient LLM Unlearning | Proposes the Recover-to-Forget framework, which reconstructs gradients from LoRA to achieve efficient LLM unlearning | foundation model |