| # | Title | Summary | Keywords | |
|---|-------|---------|----------|---|
| 1 | Large Language Models can Deliver Accurate and Interpretable Time Series Anomaly Detection | Proposes LLMAD, which uses large language models for accurate and interpretable time series anomaly detection. | large language model, chain-of-thought | |
| 2 | Zero-Shot Spam Email Classification Using Pre-trained Large Language Models | Uses pre-trained large language models for zero-shot spam email classification. | large language model | |
| 3 | AMGPT: a Large Language Model for Contextual Querying in Additive Manufacturing | AMGPT: a large language model for contextual querying in the additive manufacturing domain. | large language model | |
| 4 | Large Language Model Pruning | Proposes a mutual-information-based pruning method for large language models that improves interpretability and reduces compute requirements. | large language model | |
| 5 | Scaling Laws for Discriminative Classification in Large Language Models | Applies LLMs to customer service, proposing a discriminative classification framework to improve response accuracy. | large language model | |
| 6 | Optimizing Large Language Models for OpenAPI Code Completion | Optimizes large language models to improve OpenAPI code completion performance. | large language model | |
| 7 | EmpathicStories++: A Multimodal Dataset for Empathy towards Personal Experiences | Introduces EmpathicStories++, a multimodal dataset for modeling AI empathy toward personal experiences. | multimodal | ✅ |
| 8 | Sparse Matrix in Large Language Model Fine-tuning | Proposes sparse matrix tuning (SMT), narrowing the performance gap between PEFT and full fine-tuning while reducing compute and memory costs. | large language model | |
| 9 | The Mosaic Memory of Large Language Models | Reveals the "mosaic memory" phenomenon in large language models, challenging conventional memorization assumptions. | large language model | |
| 10 | Everything is Editable: Extend Knowledge Editing to Unstructured Data in Large Language Models | UnKE: extends knowledge editing to unstructured data in large language models. | large language model | |
| 11 | BiSup: Bidirectional Quantization Error Suppression for Large Language Models | BiSup: a bidirectional quantization error suppression method for large language models that improves low-bit quantization performance. | large language model | |
| 12 | SCALM: Towards Semantic Caching for Automated Chat Services with Large Language Models | Proposes SCALM, which uses semantic caching to improve the efficiency and reduce the cost of LLM chat services. | large language model | |
| 13 | DnA-Eval: Enhancing Large Language Model Evaluation through Decomposition and Aggregation | DnA-Eval: enhances large language model evaluation through decomposition and aggregation. | large language model | |
| 14 | Large Language Model Sentinel: LLM Agent for Adversarial Purification | Proposes LLAMOS, an LLM agent that purifies adversarial examples to improve LLM robustness. | large language model | |
| 15 | An Evaluation of Estimative Uncertainty in Large Language Models | Evaluates how well large language models express estimative uncertainty and how closely this aligns with humans. | large language model | |
| 16 | Machine Unlearning in Large Language Models | Proposes a gradient-ascent-based unlearning method for LLMs, improving model ethics and safety. | large language model | |
| 17 | OptLLM: Optimal Assignment of Queries to Large Language Models | OptLLM: optimizes query assignment across large language models for the best cost-effectiveness under budget constraints. | large language model | |
| 18 | Generalizable and Scalable Multistage Biomedical Concept Normalization Leveraging Large Language Models | Leverages large language models to improve the performance and generalizability of biomedical concept normalization. | large language model | |
| 19 | Clustered Retrieved Augmented Generation (CRAG) | Proposes CRAG, a clustered retrieval-augmented generation method that effectively reduces the number of tokens in LLM prompts. | large language model | |
| 20 | GECKO: Generative Language Model for English, Code and Korean | GECKO: a generative language model for English, code, and Korean. | large language model | ✅ |
| 21 | Synergizing In-context Learning with Hints for End-to-end Task-oriented Dialog Systems | SyncTOD: combines in-context learning with hints to improve end-to-end task-oriented dialog systems in low-data settings. | large language model | |
| 22 | Adapting PromptORE for Modern History: Information Extraction from Hispanic Monarchy Documents of the XVIth Century | Proposes Biased PromptORE to tackle relation extraction from 16th-century Spanish historical documents. | large language model | |
| 23 | Linearly Controlled Language Generation with Performative Guarantees | Proposes a linearly controlled language generation method with performative guarantees, addressing LLM controllability in critical applications. | large language model | |
| 24 | Benchmarking the Performance of Pre-trained LLMs across Urdu NLP Tasks | Evaluates pre-trained LLMs on Urdu NLP tasks, revealing the relationship between model capability and language coverage. | large language model | |
| 25 | Organic Data-Driven Approach for Turkish Grammatical Error Correction and LLMs | Proposes an organic data-driven approach for Turkish grammatical error correction and LLM training. | large language model | |
| 26 | Before Generation, Align it! A Novel and Effective Strategy for Mitigating Hallucinations in Text-to-SQL Generation | Proposes TA-SQL, a task-alignment strategy that mitigates hallucinations in text-to-SQL generation. | large language model | |
| 27 | DeTikZify: Synthesizing Graphics Programs for Scientific Figures and Sketches with TikZ | DeTikZify: a multimodal language model that synthesizes TikZ graphics programs from sketches and existing figures. | multimodal | |
| 28 | Decoding at the Speed of Thought: Harnessing Parallel Decoding of Lexical Units for LLMs | Proposes lexical unit decoding (LUD), accelerating LLM inference without loss of generation quality. | large language model | ✅ |
| 29 | Cross-Task Defense: Instruction-Tuning LLMs for Content Safety | Proposes an instruction-tuning-based cross-task defense that improves LLM safety when handling malicious content. | large language model | |
| 30 | RAEE: A Robust Retrieval-Augmented Early Exiting Framework for Efficient Inference | Proposes RAEE, a retrieval-augmented framework for efficient and robust early-exit inference in large language models. | large language model | |
| 31 | EffiLearner: Enhancing Efficiency of Generated Code via Self-Optimization | EffiLearner: improves the efficiency of LLM-generated code via self-optimization. | large language model | ✅ |
| 32 | VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks | VB-LoRA: achieves extremely parameter-efficient fine-tuning using vector banks. | large language model | ✅ |
| 33 | Expert-Token Resonance MoE: Bidirectional Routing with Efficiency Affinity-Driven Active Selection | Proposes an expert-token resonance MoE model that improves training efficiency and model performance through bidirectional routing and efficiency-affinity-driven active selection. | large language model | |
| 34 | SoAy: A Solution-based LLM API-using Methodology for Academic Information Seeking | Proposes SoAy, a solution-based LLM API-using methodology for academic information seeking. | large language model | ✅ |