Investigating User-Side Fairness in Outcome and Process for Multi-Type Sensitive Attributes in Recommendations

Weixin Chen, Li Chen, Yuhan Zhao

Research output: Contribution to journal › Journal article › peer-review

Abstract

Recommender systems are widely employed to alleviate information overload for users, making it essential to mitigate the unfairness issues they can introduce. Current user-side fairness studies in recommendation aim to ensure independence from users’ sensitive attributes in terms of either outcome or process. The former, outcome fairness, emphasizes equity of outcome metrics across user groups defined by sensitive attributes, for which regularization has been proposed as a typical approach to reducing the recommendation imbalance among groups. The latter, process fairness, focuses on independence between sensitive attributes and the recommendation process itself, for which adversarial learning has been widely adopted to remove sensitive information from individual users’ representations. However, little work has investigated how these methods balance group-level outcome fairness and individual-level process fairness in a given scenario. Moreover, existing experiments have mainly considered a single type of user attribute, e.g., behavioral or demographic attributes, while neglecting other possibly sensitive attributes such as psychological ones. In this paper, we investigate both the outcome and process fairness performance of regularization and adversarial learning methods over multiple types of sensitive attributes, including behavioral (e.g., activity level), demographic (e.g., gender, age), and psychological attributes (e.g., Big Five personality, curiosity). Experiments on four datasets show that unfairness can take different forms for different types of sensitive attributes, and that, relative to the regularization-based method, adversarial learning has higher potential to achieve a balance between outcome and process user-side fairness in recommendations. Our source code is available at https://github.com/WeixinChen98/OtPrFairness-MultiAttr.
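
To make the two families of methods compared in the abstract concrete, below is a minimal sketch, assuming a standard PyTorch matrix-factorization recommender. This is not the authors’ released code (see the GitHub link above); names such as `FairMF`, `outcome_fairness_penalty`, `lambda_reg`, and `lambda_adv` are hypothetical placeholders for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in adversarial debiasing: the forward
    pass is the identity, while the backward pass flips gradient signs so
    the encoder learns to remove sensitive information."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output


class FairMF(nn.Module):
    """Matrix-factorization recommender with an adversarial attribute head."""

    def __init__(self, n_users, n_items, dim=64, n_attr_classes=2):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Adversary tries to predict a sensitive attribute (e.g., gender,
        # activity level, a personality trait) from the user embedding.
        self.attr_head = nn.Linear(dim, n_attr_classes)

    def forward(self, users, items):
        u = self.user_emb(users)
        scores = (u * self.item_emb(items)).sum(-1)
        return scores, u

    def adversary_logits(self, u):
        # Process fairness: gradients from the attribute classifier are
        # reversed before reaching the user embeddings, pushing sensitive
        # information out of the representation.
        return self.attr_head(GradReverse.apply(u))


def outcome_fairness_penalty(scores, group_mask):
    """Group-level regularizer for outcome fairness: squared gap between
    the mean predicted scores of two sensitive-attribute groups.
    `group_mask` is a boolean tensor marking one group within the batch."""
    return (scores[group_mask].mean() - scores[~group_mask].mean()) ** 2


def joint_loss(model, scores, u, ratings, group_mask, attr_labels,
               lambda_reg=0.1, lambda_adv=0.1):
    """Hypothetical joint objective showing where each penalty attaches."""
    rec_loss = F.mse_loss(scores, ratings)                   # accuracy
    out_pen = outcome_fairness_penalty(scores, group_mask)   # outcome fairness
    adv_loss = F.cross_entropy(model.adversary_logits(u),
                               attr_labels)                  # process fairness
    return rec_loss + lambda_reg * out_pen + lambda_adv * adv_loss
```

Note that the abstract compares the two mechanisms rather than prescribing their combination; the joint loss above simply shows how each penalty attaches to the same recommendation backbone.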
Original language: English
Number of pages: 29
Journal: ACM Transactions on Recommender Systems
Publication status: E-pub ahead of print, 24 Apr 2025

User-Defined Keywords

  • Fairness
  • Recommendations
  • Behavioral Attributes
  • Demographic Attributes
  • Psychological Attributes
  • Outcome Fairness
  • Process Fairness
  • Regularization
  • Adversarial Learning
  • Recommender Systems
