| # | Title | Summary | Tag | |
|---|-------|---------|-----|---|
| 1 | What Level of Automation is "Good Enough"? A Benchmark of Large Language Models for Meta-Analysis Data Extraction | Benchmarks the level of automation LLMs achieve in meta-analysis data extraction and proposes practical guidelines. | large language model | |
| 2 | Theoretical Foundations and Mitigation of Hallucination in Large Language Models | Provides theoretical foundations, detection methods, and mitigation strategies for hallucination in large language models. | large language model | |
| 3 | MUR: Momentum Uncertainty guided Reasoning for Large Language Models | Proposes momentum-uncertainty-guided reasoning to improve the reasoning efficiency of large language models. | large language model | |
| 4 | From Neurons to Semantics: Evaluating Cross-Linguistic Alignment Capabilities of Large Language Models via Neurons Alignment | Proposes NeuronXA, which evaluates the cross-lingual alignment capabilities of large language models via neuron alignment. | large language model | |
| 5 | Filling the Gap: Is Commonsense Knowledge Generation useful for Natural Language Inference? | Explores the utility of commonsense knowledge generation for natural language inference, using large language models to fill the knowledge gap. | large language model | |
| 6 | WebShaper: Agentically Data Synthesizing via Information-Seeking Formalization | Proposes WebShaper to address data synthesis for information-seeking agents. | large language model | |
| 7 | A Penalty Goes a Long Way: Measuring Lexical Diversity in Synthetic Texts Under Prompt-Influenced Length Variations | Proposes the Penalty-Adjusted Type-Token Ratio (PATTR) to correct the bias in lexical-diversity measurement caused by prompt-influenced length variation in synthetic texts. | large language model | |
| 8 | Sparse Autoencoder-guided Supervised Finetuning to Mitigate Unexpected Code-Switching in LLMs | Proposes SASFT, a sparse autoencoder-guided supervised fine-tuning method that significantly mitigates unexpected code-switching in LLMs. | large language model | |
| 9 | MEKiT: Multi-source Heterogeneous Knowledge Injection Method via Instruction Tuning for Emotion-Cause Pair Extraction | MEKiT: a multi-source heterogeneous knowledge injection method via instruction tuning for emotion-cause pair extraction. | large language model | |
| 10 | Tiny language models | Explores tiny language models, validating their pretraining effectiveness and scalability to lower the barrier to NLP research. | large language model | ✅ |
| 11 | Doc2Chart: Intent-Driven Zero-Shot Chart Generation from Documents | Doc2Chart: proposes an intent-driven framework for zero-shot chart generation from documents. | large language model | |
| 12 | FastLongSpeech: Enhancing Large Speech-Language Models for Efficient Long-Speech Processing | FastLongSpeech: improves the efficiency of large speech-language models (LSLMs) on long-speech processing via iterative fusion and dynamic compression training. | large language model | |