Interpretable Deep Generative Recommendation Models

Huafeng Liu, Liping Jing*, Jingxuan Wen, Pengyu Xu, Jiaqi Wang, Jian Yu, Michael K. Ng

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

9 Citations (Scopus)

Abstract

User preference modeling in recommendation systems aims to improve the customer experience by discovering users' intrinsic preferences from prior behavior data. This is challenging because user preferences usually have a complicated structure, exhibiting both inter-user preference similarity and intra-user preference diversity: inter-user similarity means that different users may share similar preferences, while intra-user diversity means that a single user may hold several preferences. In the literature, deep generative models have been successfully applied to recommendation systems owing to their flexibility in modeling statistical distributions and their strong capacity for non-linear representation learning. However, their simple generative processes struggle with complex user preferences. Moreover, the latent representations learned by deep generative models are usually entangled, ranging from observed-level factors that dominate the complex correlations between users to latent-level factors that characterize a user's preference, which makes such deep models hard to explain and ill-suited to recommendation. In this paper, we therefore propose an Interpretable Deep Generative Recommendation Model (InDGRM) that characterizes inter-user preference similarity and intra-user preference diversity while simultaneously disentangling the learned representations at both the observed and latent levels. In InDGRM, observed-level disentanglement on users is achieved by modeling the user-cluster structure (i.e., inter-user preference similarity) in a rich multimodal space, so that users with similar preferences are assigned to the same cluster. Observed-level disentanglement on items is achieved by modeling intra-user preference diversity with a prototype learning strategy, where different user intentions are captured by item groups (one group per intention). To promote disentangled latent representations, InDGRM adopts structure- and sparsity-inducing penalties and integrates them into the generative procedure, which forces each latent factor to focus on a limited subset of items (e.g., one item group) and thereby benefits latent-level disentanglement. The model can be efficiently inferred by minimizing a penalized upper bound with the aid of a local variational optimization technique. Theoretically, we analyze the generalization error bound of InDGRM to guarantee its performance. A series of experiments on four widely-used benchmark datasets demonstrates the superiority of InDGRM in both recommendation performance and interpretability.
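To make the ingredients named in the abstract more concrete, the sketch below is a minimal VAE-style collaborative-filtering model with two hedged stand-ins: a set of learnable cluster means over user latent codes (a crude proxy for the user-cluster / mixture structure behind inter-user similarity) and an L1 penalty on decoder weights so each latent factor loads on a sparse subset of items (a proxy for the sparsity-inducing penalty behind latent-level disentanglement). This is an illustrative PyTorch sketch, not the authors' InDGRM implementation; the class and hyperparameter names (SketchGenerativeRecommender, num_clusters, l1_weight, kl_weight) are assumptions introduced here for illustration.

    # Illustrative sketch only; not the authors' code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SketchGenerativeRecommender(nn.Module):
        def __init__(self, num_items, latent_dim=64, num_clusters=10):
            super().__init__()
            self.encoder = nn.Linear(num_items, 2 * latent_dim)  # outputs mean and log-variance
            self.decoder = nn.Linear(latent_dim, num_items)      # latent factors -> item logits
            # Learnable cluster means: a crude stand-in for a mixture prior over users.
            self.cluster_means = nn.Parameter(torch.randn(num_clusters, latent_dim))

        def forward(self, x):
            mu, logvar = self.encoder(F.normalize(x)).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            return self.decoder(z), mu, logvar

        def loss(self, x, l1_weight=1e-4, kl_weight=0.2):
            logits, mu, logvar = self(x)
            # Multinomial likelihood over implicit-feedback interactions.
            recon = -(F.log_softmax(logits, dim=-1) * x).sum(-1).mean()
            # KL-like term pulling each user toward the nearest cluster mean
            # (a rough approximation of a mixture prior, not the paper's bound).
            d = torch.cdist(mu, self.cluster_means).min(dim=-1).values
            kl = 0.5 * (d.pow(2) + logvar.exp().sum(-1) - logvar.sum(-1) - mu.size(-1)).mean()
            # Sparsity-inducing penalty on the factor-to-item loadings.
            sparsity = self.decoder.weight.abs().sum()
            return recon + kl_weight * kl + l1_weight * sparsity

The point of the sparsity term is that each column of the decoder weight matrix (one latent factor's loadings over items) is driven toward a small set of non-zero entries, so a factor can be read off as "this group of items", which is the kind of latent-level interpretability the abstract describes.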

Original language: English
Pages (from-to): 1-54
Number of pages: 54
Journal: Journal of Machine Learning Research
Volume: 22
Publication status: Published - Aug 2021

Scopus Subject Areas

  • Control and Systems Engineering
  • Software
  • Statistics and Probability
  • Artificial Intelligence

User-Defined Keywords

  • Collaborative filtering
  • Deep generative model
  • Interpretable machine learning
  • Latent factor model
  • Recommendation system
