Abstract
In modern recommender systems, there are usually comments or reviews from users that justify their ratings for different items. Trained on such a textual corpus, explainable recommendation models learn to discover user interests and generate personalized explanations. Though able to provide plausible explanations, existing models tend to generate repeated sentences for different items or empty sentences with insufficient details. This begs an interesting question: can we immerse the models in a multimodal environment to gain proper awareness of real-world concepts and alleviate the above shortcomings? To this end, we propose a visually-enhanced approach named METER with the help of visualization generation and text–image matching discrimination: the explainable recommendation model is encouraged to visualize what it refers to, while incurring a penalty if the visualization is incongruent with the textual explanation. Experimental results and a manual assessment demonstrate that our approach can improve not only the text quality but also the diversity and explainability of the generated explanations.
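The record does not include implementation details, but the text–image matching penalty described in the abstract can be illustrated with a minimal sketch, assuming a CLIP-style contrastive discriminator over batch-aligned text and image embeddings. The function name `matching_penalty`, the embedding shapes, the temperature, and the weighting term `lambda_match` are illustrative assumptions, not the authors' actual METER implementation.

```python
import torch
import torch.nn.functional as F

def matching_penalty(text_emb: torch.Tensor, image_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Contrastive text-image matching loss (assumed formulation): each
    generated explanation should score highest against its own
    visualization within the batch, and vice versa."""
    text = F.normalize(text_emb, dim=-1)     # (B, D) unit-norm text embeddings
    image = F.normalize(image_emb, dim=-1)   # (B, D) unit-norm image embeddings
    logits = text @ image.t() / temperature  # (B, B) pairwise similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over rows (text -> image) and columns
    # (image -> text); incongruent pairs raise the loss.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Hypothetical joint objective: the explanation-generation loss plus the
# matching penalty, so visualizations that contradict the textual
# explanation are penalized during training.
# total_loss = generation_loss + lambda_match * matching_penalty(t, v)
```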
Original language | English
---|---
Title of host publication | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Publisher | Association for Computational Linguistics (ACL)
Pages | 244–255
Number of pages | 12
ISBN (Print) | 9781955917216
DOIs |
Publication status | Published - May 2022
Event | 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022, Dublin, Ireland, 22 May 2022 → 27 May 2022 (https://www.2022.aclweb.org/)
Conference
Conference | 60th Annual Meeting of the Association for Computational Linguistics, ACL 2022
---|---
Country/Territory | Ireland
City | Dublin
Period | 22/05/22 → 27/05/22
Internet address | https://www.2022.aclweb.org/