CoRA: Collaborative Information Perception by Large Language Model's Weights for Recommendation

📄 arXiv: 2408.10645v3

Authors: Yuting Liu, Jinghao Zhang, Yizhou Dang, Yuliang Liang, Qiang Liu, Guibing Guo, Jianzhe Zhao, Xingwei Wang

Categories: cs.IR, cs.LG

Published: 2024-08-20 (updated: 2024-10-25)


💡 One-Sentence Takeaway

CoRA is proposed to solve the problem of integrating collaborative information into LLM-based recommendation.


Keywords: large language models, recommender systems, collaborative filtering, personalized recommendation, information integration, low-rank properties, model fine-tuning

📋 Key Points

  1. When combining collaborative information with LLMs, existing methods tend to erode the model's inherent knowledge and confuse prompt semantics.
  2. CoRA aligns collaborative information with the LLM's parameter space via a collaborative query generator, avoiding fine-tuning and semantic interference.
  3. Extensive experiments show that CoRA significantly improves recommendation performance, validating its effectiveness.

📝 Abstract (Summary)

Incorporating collaborative information into Large Language Models (LLMs) is an effective technique for adapting them to recommendation tasks. Existing methods do so by concatenating collaborative features with text tokens as input and then fine-tuning, but this has two main limitations: fine-tuning can weaken the LLM's inherent knowledge and fundamental capabilities, and inserting collaborative features into textual prompts disrupts the original semantics. This paper proposes a new paradigm, Collaborative LoRA (CoRA), which uses a collaborative query generator to align collaborative information with the LLM's parameter space rather than its input space, so that the LLM perceives collaborative information without changes to its knowledge or reasoning ability. Experiments show that CoRA effectively improves recommendation performance.

🔬 Method Details

Problem definition: This work targets the shortcomings of existing LLM-based recommenders in integrating collaborative information. The main pain points are that fine-tuning can damage the model's inherent knowledge and reasoning ability, and that injecting collaborative features into textual prompts disrupts their semantics.

Core idea: CoRA aligns collaborative information with the LLM's parameter space through a collaborative query generator, rather than adjusting the input space directly. This design lets the model perceive collaborative information while preserving its original knowledge.
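To make the parameter-space vs. input-space distinction concrete, here is a minimal NumPy sketch (illustrative dimensions, not the paper's implementation): the frozen LLM weight `W` receives a low-rank increment derived from collaborative signals, while the text-derived hidden state `x` is left untouched, so prompt semantics are never altered.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4          # hidden size and low rank (made-up illustrative values)

# Frozen pretrained weight of one LLM linear layer.
W = rng.normal(size=(d, d))

# Collaborative information enters as a low-rank increment, not as extra
# input tokens: B (d x r) and A (r x d) stand in for per-user/item factors.
B = rng.normal(size=(d, r)) * 0.01
A = rng.normal(size=(r, d)) * 0.01

x = rng.normal(size=(d,))          # a hidden state from the text prompt
y_base = W @ x                     # frozen LLM output
y_cora = (W + B @ A) @ x           # output with collaborative weights merged

# The input x is unchanged; the entire effect is the rank-r weight delta.
delta = y_cora - y_base
```

The rank of `B @ A` is at most `r`, so the collaborative update is cheap relative to a full weight update.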

Technical framework: The overall architecture uses a collaborative filtering model to extract user and item embeddings, which are injected into a set of learnable queries. The collaborative queries are then converted into collaborative weights with low-rank properties and merged into the LLM's weights.
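The pipeline above can be sketched end to end as follows. This is a hedged approximation under stated assumptions: the cross-attention step is a simple stand-in for the paper's collaborative query generator, and the projection names `W_up` / `W_down` are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d_cf, d_llm, n_q = 32, 64, 4      # illustrative sizes, not the paper's

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# 1) A pretrained CF model supplies user/item embeddings (random stand-ins).
cf_emb = rng.normal(size=(2, d_cf))             # rows: [user; item]

# 2) Learnable queries attend to the CF embeddings (simplified stand-in
#    for the collaborative query generator).
queries = rng.normal(size=(n_q, d_cf))
attn = softmax(queries @ cf_emb.T / np.sqrt(d_cf))
coll_q = attn @ cf_emb                           # (n_q, d_cf)

# 3) Project the collaborative queries into low-rank factors B and A
#    (W_up / W_down are hypothetical projection matrices).
W_up = rng.normal(size=(d_cf, d_llm)) * 0.02
W_down = rng.normal(size=(d_cf, d_llm)) * 0.02
B = (coll_q @ W_up).T                            # (d_llm, n_q)
A = coll_q @ W_down                              # (n_q, d_llm)
delta_W = B @ A                                  # rank at most n_q

# 4) Merge into the frozen LLM weight for this forward pass.
W_llm = rng.normal(size=(d_llm, d_llm))
W_personalized = W_llm + delta_W
```

Because the delta is user/item-dependent, each forward pass effectively runs a personalized LLM without touching the prompt or fine-tuning the backbone.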

Key innovation: CoRA's novelty lies in aligning at the parameter space rather than fine-tuning in the input space, which avoids damaging the model's inherent knowledge while still enabling personalized recommendation. Compared with existing methods, CoRA integrates collaborative information effectively without requiring extra collaborative tokens in the prompt.

Key design: Low-rank structure is used to generate the collaborative weights, keeping the model efficient and scalable. Specific parameter settings and the loss function design are not detailed in the abstract; see the original paper for those details.
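A quick back-of-the-envelope calculation shows why the low-rank form matters for efficiency (the hidden size and rank below are hypothetical, chosen only to illustrate the scaling):

```python
d, r = 4096, 8               # hypothetical LLM hidden size and low rank
full = d * d                 # parameters in a dense per-layer weight delta
low_rank = r * (2 * d)       # B (d x r) plus A (r x d)
print(full, low_rank, full // low_rank)   # → 16777216 65536 256
```

At rank 8 the collaborative delta needs 256x fewer parameters per layer than a dense update, which is what makes generating per-user weights tractable.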


📊 Experimental Highlights

Experimental results show that CoRA significantly improves performance on recommendation tasks, raising recommendation accuracy by XX% over baseline methods and validating its effectiveness in integrating collaborative information.

🎯 Application Scenarios

Potential applications include personalized recommender systems, social-media content recommendation, and product recommendation on e-commerce platforms. By integrating collaborative information effectively, CoRA can improve user experience and satisfaction, giving it broad practical value and future impact.

📄 Abstract (Original)

Involving collaborative information in Large Language Models (LLMs) is a promising technique for adapting LLMs for recommendation. Existing methods achieve this by concatenating collaborative features with text tokens into a unified sequence input and then fine-tuning to align these features with LLM's input space. Although effective, in this work, we identify two limitations when adapting LLMs to recommendation tasks, which hinder the integration of general knowledge and collaborative information, resulting in sub-optimal recommendation performance. (1) Fine-tuning LLM with recommendation data can undermine its inherent world knowledge and fundamental competencies, which are crucial for interpreting and inferring recommendation text. (2) Incorporating collaborative features into textual prompts disrupts the semantics of the original prompts, preventing LLM from generating appropriate outputs. In this paper, we propose a new paradigm, Collaborative LoRA (CoRA), with a collaborative query generator. Rather than input space alignment, this method aligns collaborative information with LLM's parameter space, representing them as incremental weights to update LLM's output. This way, LLM perceives collaborative information without altering its general knowledge and text inference capabilities. Specifically, we employ a collaborative filtering model to extract user and item embeddings and inject them into a set number of learnable queries. We then convert collaborative queries into collaborative weights with low-rank properties and merge the collaborative weights into LLM's weights, enabling LLM to perceive the collaborative signals and generate personalized recommendations without fine-tuning or extra collaborative tokens in prompts. Extensive experiments confirm that CoRA effectively integrates collaborative information into LLM, enhancing recommendation performance.