| # | Title | Summary | Tags | Read |
|---|-------|---------|------|------|
| 1 | How do Multimodal Foundation Models Encode Text and Speech? An Analysis of Cross-Lingual and Cross-Modal Representations | Analyzes how multimodal foundation models encode text and speech, revealing cross-lingual and cross-modal representation differences. | foundation model, multimodal | |
| 2 | Leveraging Large Language Models and Topic Modeling for Toxicity Classification | Uses large language models and topic modeling to improve toxicity classification and model fairness. | large language model | ✅ |
| 3 | What Differentiates Educational Literature? A Multimodal Fusion Approach of Transformers and Computational Linguistics | Proposes a multimodal method fusing Transformers with computational linguistics to assess educational text difficulty and support curriculum adaptation. | multimodal | |
| 4 | Strategic Prompting for Conversational Tasks: A Comparative Analysis of Large Language Models Across Diverse Conversational Tasks | Comparative analysis of how strategic prompting affects large language models across diverse conversational tasks. | large language model | |
| 5 | One Mind, Many Tongues: A Deep Dive into Language-Agnostic Knowledge Neurons in Large Language Models | Proposes MATRICE, addressing uncertainty in locating language-agnostic knowledge neurons in large language models. | large language model | |
| 6 | Natural Language Understanding and Inference with MLLM in Visual Question Answering: A Survey | Survey of natural language understanding and inference with multimodal large language models in visual question answering. | large language model, multimodal | |
| 7 | Star Attention: Efficient LLM Inference over Long Sequences | Proposes Star Attention, which accelerates long-sequence LLM inference via block-sparse attention. | large language model | |
| 8 | Different Bias Under Different Criteria: Assessing Bias in LLMs with a Fact-Based Approach | Proposes fact-based metrics for evaluating bias in LLMs, revealing how bias differs under different criteria. | large language model | |
| 9 | Meaningless is better: hashing bias-inducing words in LLM prompts improves performance in logical reasoning and statistical learning | Proposes a hashing-based prompting method that improves LLM performance in logical reasoning and statistical learning. | large language model | |
| 10 | Socio-Emotional Response Generation: A Human Evaluation Protocol for LLM-Based Conversational Systems | Proposes a conversational system based on socio-emotional strategy planning, improving the quality and controllability of LLM-generated responses. | large language model | |
| 11 | Overcoming Non-monotonicity in Transducer-based Streaming Generation | Proposes MonoAttn-Transducer, addressing non-monotonic alignment in Transducer-based streaming generation tasks. | TAMP | |
| 12 | Adaptive Deployment of Untrusted LLMs Reduces Distributed Threats | Adaptively deploys untrusted LLMs to reduce distributed threats. | large language model | |
| 13 | Enhancing Character-Level Understanding in LLMs through Token Internal Structure Learning | TIPA: improving character-level understanding in LLMs by learning the internal structure of tokens. | large language model | |