| # | Title | Summary | Tags | Read |
|---|---|---|---|---|
| 1 | MLLM Is a Strong Reranker: Advancing Multimodal Retrieval-augmented Generation via Knowledge-enhanced Reranking and Noise-injected Training | RagVL: improves multimodal retrieval-augmented generation via knowledge-enhanced reranking and noise-injected training | large language model, multimodal | ✅ |
| 2 | A Taxonomy of Stereotype Content in Large Language Models | Proposes a taxonomy of stereotype content in large language models to address bias | large language model | |
| 3 | KemenkeuGPT: Leveraging a Large Language Model on Indonesia's Government Financial Data and Regulations to Enhance Decision Making | KemenkeuGPT: uses a large language model on Indonesian government financial data to enhance decision making | large language model | |
| 4 | MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts | MoMa: accelerates early-fusion multimodal pre-training with a mixture of modality-aware experts | multimodal | |
| 5 | Inductive or Deductive? Rethinking the Fundamental Reasoning Abilities of LLMs | Proposes the SolverLearner framework to evaluate LLMs' inductive reasoning and reveals their weakness in deductive reasoning | large language model | |
| 6 | CEAR: Automatic construction of a knowledge graph of chemical entities and roles from scientific literature | Proposes CEAR, which automatically constructs a knowledge graph of chemical entities and roles from scientific literature | large language model | |
| 7 | TransferTOD: A Generalizable Chinese Multi-Domain Task-Oriented Dialogue System with Transfer Capabilities | Proposes TransferTOD, a generalizable Chinese multi-domain task-oriented dialogue system with transfer capabilities | large language model | ✅ |
| 8 | A Performance Study of LLM-Generated Code on Leetcode | Evaluates LLM-generated code on Leetcode, finding its efficiency comparable to or better than human-written code | large language model | |
| 9 | Deceptive AI systems that give explanations are more convincing than honest AI systems and can amplify belief in misinformation | Deceptive AI explanations are more persuasive than honest ones and can amplify belief in misinformation | large language model | |
| 10 | MetaOpenFOAM: an LLM-based multi-agent framework for CFD | MetaOpenFOAM: an LLM-based multi-agent CFD framework enabling natural-language-driven automated simulation | large language model | |
| 11 | SAKR: Enhancing Retrieval-Augmented Generation via Streaming Algorithm and K-Means Clustering | SAKR: enhances retrieval-augmented generation via a streaming algorithm and K-Means clustering | large language model | |