| 1 |
Enhancing Clinical Note Generation with ICD-10, Clinical Ontology Knowledge Graphs, and Chain-of-Thought Prompting Using GPT-4 |
Enhancing GPT-4's clinical note generation with ICD-10, clinical knowledge graphs, and CoT prompting |
large language model chain-of-thought |
|
|
| 2 |
Challenging the Abilities of Large Language Models in Italian: a Community Initiative |
CALAMITA: a community-driven benchmark for evaluating large language model capabilities in Italian |
large language model |
|
|
| 3 |
Model Whisper: Steering Vectors Unlock Large Language Models' Potential in Test-time |
Proposes test-time steering vectors that improve large language model performance on specific tasks without fine-tuning. |
large language model |
|
|
| 4 |
LexGenius: An Expert-Level Benchmark for Large Language Models in Legal General Intelligence |
Proposes LexGenius: an expert-level benchmark for evaluating large language models on Chinese legal general intelligence. |
large language model |
✅ |
|
| 5 |
RapidUn: Influence-Driven Parameter Reweighting for Efficient Large Language Model Unlearning |
RapidUn: efficient large language model unlearning via influence-driven parameter reweighting |
large language model |
|
|
| 6 |
To Think or Not to Think: The Hidden Cost of Meta-Training with Excessive CoT Examples |
CoT-Recipe: improving LLM reasoning in meta-training by tuning the proportion of CoT examples |
large language model chain-of-thought |
|
|
| 7 |
Arbitrage: Efficient Reasoning via Advantage-Aware Speculation |
Arbitrage: efficient reasoning via advantage-aware speculation |
large language model chain-of-thought |
|
|
| 8 |
Decoding the Black Box: Discerning AI Rhetorics About and Through Poetic Prompting |
Decoding large language models' algorithmic tendencies and biases through poetic prompting patterns |
large language model |
|
|
| 9 |
LLMs Know More Than Words: A Genre Study with Syntax, Metaphor & Phonetics |
Proposes a multilingual genre classification dataset to probe LLMs' understanding of deep linguistic properties |
large language model |
|
|
| 10 |
Mitigating Catastrophic Forgetting in Target Language Adaptation of LLMs via Source-Shielded Updates |
Proposes source-shielded updates to mitigate catastrophic forgetting in target-language adaptation of LLMs |
large language model |
|
|
| 11 |
DAMASHA: Detecting AI in Mixed Adversarial Texts via Segmentation with Human-interpretable Attribution |
DAMASHA: detecting AI-generated content in mixed adversarial texts via segmentation with interpretable attribution |
large language model |
|
|
| 12 |
DaLA: Danish Linguistic Acceptability Evaluation Guided by Real World Errors |
Proposes DaLA, a linguistic acceptability benchmark grounded in real-world Danish errors. |
large language model |
|
|
| 13 |
EtCon: Edit-then-Consolidate for Reliable Knowledge Editing |
EtCon: an edit-then-consolidate paradigm for more reliable knowledge editing in large language models |
large language model |
|
|
| 14 |
SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs |
SignRoundV2: closing the performance gap in extremely low-bit post-training quantization for LLMs |
large language model |
✅ |
|
| 15 |
ADAPT: Learning Task Mixtures for Budget-Constrained Instruction Tuning |
ADAPT: learning task mixtures for instruction tuning under budget constraints. |
instruction following |
|
|
| 16 |
AdmTree: Compressing Lengthy Context with Adaptive Semantic Trees |
AdmTree: proposes adaptive semantic trees for compressing long contexts, improving LLM processing efficiency. |
large language model |
|
|
| 17 |
EvoEdit: Lifelong Free-Text Knowledge Editing through Latent Perturbation Augmentation and Knowledge-driven Parameter Fusion |
Proposes EvoEdit to address the challenge of keeping knowledge in large language models up to date |
large language model |
|
|
| 18 |
ClusterFusion: Hybrid Clustering with Embedding Guidance and LLM Adaptation |
Proposes ClusterFusion, a hybrid clustering framework combining embedding guidance and LLM adaptation to improve domain-specific text clustering. |
large language model |
|
|