| # | Title | Summary | Keywords |  |
| --- | --- | --- | --- | --- |
| 1 | Self-supervised Quantized Representation for Seamlessly Integrating Knowledge Graphs with Large Language Models | Proposes SSQR, a self-supervised quantized representation for seamlessly integrating knowledge graphs with large language models. | large language model, instruction following | |
| 2 | Large Language Models with Temporal Reasoning for Longitudinal Clinical Summarization and Prediction | Evaluates the temporal reasoning ability of LLMs on longitudinal clinical data for medical-record summarization and diagnosis prediction. | large language model, chain-of-thought | |
| 3 | Differentially Private Steering for Large Language Model Alignment | Proposes the PSA algorithm, which steers LLM alignment under differential privacy to protect private data. | large language model | |
| 4 | A Multi-Layered Large Language Model Framework for Disease Prediction | Proposes a multi-layered large language model framework to improve disease prediction in social healthcare settings. | large language model | |
| 5 | Mining for Species, Locations, Habitats, and Ecosystems from Scientific Papers in Invasion Biology: A Large-Scale Exploratory Study with Large Language Models | Uses large language models to mine species, location, habitat, and ecosystem information from the invasion-biology literature. | large language model | |
| 6 | Panacea: Mitigating Harmful Fine-tuning for Large Language Models via Post-fine-tuning Perturbation | Panacea mitigates harmful fine-tuning attacks on large language models via post-fine-tuning perturbation. | large language model | ✅ |
| 7 | Examining the Robustness of Large Language Models across Language Complexity | Examines the robustness of large language models across levels of language complexity, focusing on the analysis of student writing. | large language model | |
| 8 | Contextually Structured Token Dependency Encoding for Large Language Models | Proposes contextually structured token dependency encoding to improve the contextual coherence of sequences generated by large language models. | large language model | |
| 9 | Mixed-Precision Graph Neural Quantization for Low Bit Large Language Models | Proposes MG-PTQ, which uses a graph neural network for mixed-precision quantization of low-bit large language models (a generic quantization sketch follows the table). | large language model | |
| 10 | Statistical multi-metric evaluation and visualization of LLM system predictive performance | Proposes a statistical multi-metric evaluation and visualization framework for LLM system predictive performance to support configuration decisions (a bootstrap-CI sketch follows the table). | large language model | |
| 11 | Rope to Nope and Back Again: A New Hybrid Attention Strategy | Proposes a hybrid attention strategy that improves the performance and efficiency of long-context LLMs on both long- and short-context tasks (a baseline RoPE sketch follows the table). | large language model | |
| 12 | Thoughts Are All Over the Place: On the Underthinking of o1-Like LLMs | Identifies the "underthinking" problem in the reasoning of o1-like LLMs and proposes the TIP decoding strategy to mitigate it. | large language model | |
| 13 | Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch | Proposes Streaming DiLoCo, which overlaps communication with computation to sharply reduce the bandwidth required for distributed LLM training. | large language model | |
| 14 | RbFT: Robust Fine-tuning for Retrieval-Augmented Generation against Retrieval Defects | Proposes RbFT, a robust fine-tuning method that strengthens RAG systems against retrieval defects. | large language model | |
| 15 | Efficiency and Effectiveness of LLM-Based Summarization of Evidence in Crowdsourced Fact-Checking | Uses LLM-generated evidence summaries to improve the efficiency and effectiveness of crowdsourced fact-checking. | large language model | |
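
For background on row 9, the sketch below shows what generic symmetric per-channel weight quantization looks like in NumPy. It illustrates low-bit post-training quantization in general, not the MG-PTQ method; the 4-bit setting and the helper names are assumptions made for the example.

```python
import numpy as np

def quantize_per_channel(weights: np.ndarray, n_bits: int = 4):
    """Symmetric per-output-channel quantization of a 2-D weight matrix.

    Generic illustration only: mixed-precision schemes such as MG-PTQ choose
    the bit-width per weight group rather than fixing it as done here.
    """
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 7 for 4-bit
    scale = np.abs(weights).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)          # guard against all-zero rows
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scale

# Measure the round-trip error on a random weight matrix.
w = np.random.randn(8, 16).astype(np.float32)
q, s = quantize_per_channel(w, n_bits=4)
print("mean abs quantization error:", np.abs(w - dequantize(q, s)).mean())
```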
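
The kind of statistical comparison referenced in row 10 can be illustrated with a plain percentile-bootstrap confidence interval over per-example scores. This is a minimal sketch with made-up metric names and data; it does not reproduce the paper's framework.

```python
import numpy as np

def bootstrap_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of per-example scores."""
    rng = np.random.default_rng(seed)
    n = len(scores)
    means = np.array([scores[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), lo, hi

# Hypothetical per-example scores for one LLM system configuration.
rng = np.random.default_rng(42)
metrics = {
    "accuracy": rng.binomial(1, 0.78, size=500).astype(float),
    "rougeL": np.clip(rng.normal(0.42, 0.12, size=500), 0.0, 1.0),
}
for name, scores in metrics.items():
    mean, lo, hi = bootstrap_ci(scores)
    print(f"{name}: {mean:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```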
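
Row 11 contrasts RoPE- and NoPE-style attention layers. As background, here is a minimal NumPy sketch of standard rotary position embedding (the common half-split formulation), not the paper's hybrid strategy; the function and variable names are chosen for the example.

```python
import numpy as np

def rope(vec: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Rotate one query/key vector as if it sits at token position `pos`.

    Standard RoPE: each feature pair is rotated by pos * theta_i, so the dot
    product of a rotated query and key depends only on their relative offset.
    """
    half = vec.shape[-1] // 2
    theta = base ** (-np.arange(half) / half)         # per-pair frequencies
    cos, sin = np.cos(pos * theta), np.sin(pos * theta)
    v1, v2 = vec[:half], vec[half:]
    return np.concatenate([v1 * cos - v2 * sin, v1 * sin + v2 * cos])

# The same content vectors shifted by 100 positions give the same attention
# score, illustrating RoPE's relative-position behaviour.
q, k = np.random.randn(64), np.random.randn(64)
print(np.isclose(rope(q, 5) @ rope(k, 3), rope(q, 105) @ rope(k, 103)))  # True
```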