Unleashing the Retrieval Potential of Large Language Models in Conversational Recommender Systems

Ting Yang, Li Chen*

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

1 Citation (Scopus)

Abstract

Conversational recommender systems (CRSs) aim to capture user preferences and provide personalized recommendations through interactive natural language dialogue. The recent advent of large language models (LLMs) has revolutionized human engagement in natural conversation, driven by their extensive world knowledge and remarkable natural language understanding and generation capabilities. However, introducing LLMs into CRSs presents new technical challenges. Directly prompting LLMs for recommendation generation requires understanding a large and evolving item corpus, as well as grounding the generated recommendations in the real item space. On the other hand, generating recommendations based on external recommendation engines or directly integrating their suggestions into responses may constrain the overall performance of LLMs, since these engines generally have inferior representation abilities compared to LLMs. To address these challenges, we propose an end-to-end large-scale CRS model, named ReFICR, a novel LLM-enhanced conversational recommender that empowers a retrievable large language model to perform conversational recommendation by following retrieval and generation instructions through lightweight tuning. We decompose the complex CRS task into multiple subtasks and formulate them in two instruction formats: retrieval and generation. The hidden states of ReFICR are used to generate text embeddings for retrieval, while ReFICR is simultaneously fine-tuned to handle the generation subtasks. We jointly optimize a contrastive objective to enhance the text embeddings for retrieval and the language modeling objective for generation. Experimental results on public datasets demonstrate that ReFICR significantly outperforms baselines in terms of recommendation accuracy and response quality. Our code is publicly available at https://github.com/yt556677/ReFICR.
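As a rough illustration of the joint training described in the abstract, the following minimal PyTorch-style sketch combines an in-batch contrastive (InfoNCE) loss over pooled hidden-state embeddings for the retrieval subtasks with the standard language modeling loss for the generation subtasks. The function names, the mean-pooling choice, and the weighting factor alpha are illustrative assumptions, not the paper's exact implementation; see the authors' repository for the actual code.

```python
import torch
import torch.nn.functional as F

def pool_hidden_states(hidden_states, attention_mask):
    # Mean-pool the decoder's last hidden states into one text embedding per sequence.
    mask = attention_mask.unsqueeze(-1).float()
    return (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-6)

def contrastive_loss(query_emb, item_emb, temperature=0.05):
    # In-batch InfoNCE: the i-th dialogue context should match the i-th item text.
    q = F.normalize(query_emb, dim=-1)
    k = F.normalize(item_emb, dim=-1)
    logits = q @ k.t() / temperature
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

def joint_loss(lm_logits, lm_labels, query_emb, item_emb, alpha=1.0):
    # Generation subtasks: next-token cross-entropy on instruction-formatted data.
    gen_loss = F.cross_entropy(
        lm_logits.view(-1, lm_logits.size(-1)), lm_labels.view(-1), ignore_index=-100
    )
    # Retrieval subtasks: contrastive objective over the pooled embeddings (assumed weighting).
    ret_loss = contrastive_loss(query_emb, item_emb)
    return gen_loss + alpha * ret_loss
```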
Original language: English
Title of host publication: RecSys '24: Proceedings of the 18th ACM Conference on Recommender Systems
Editors: Tommaso Di Noia, Pasquale Lops, Thorsten Joachims, Katrien Verbert, Pablo Castells, Zhenhua Dong, Ben London
Publisher: Association for Computing Machinery (ACM)
Pages: 43-52
Number of pages: 10
ISBN (Electronic): 9798400705052
ISBN (Print): 9798400705052
DOIs
Publication status: Published - 8 Oct 2024

Publication series

Name: RecSys 2024 - Proceedings of the 18th ACM Conference on Recommender Systems

User-Defined Keywords

  • Conversational Recommender Systems
  • Instruction Tuning
  • Retrievable Large Language Models
