TY - JOUR
T1 - Investigating User-Side Fairness in Outcome and Process for Multi-Type Sensitive Attributes in Recommendations
AU - Chen, Weixin
AU - Chen, Li
AU - Zhao, Yuhan
N1 - This work is supported by the Hong Kong Baptist University IG-FNRA project (RC-FNRA-IG/21-22/SCI/01), the Key Research Partnership Scheme (KRPS/23-24/02), and the NSFC/RGC Joint Research Scheme (N_HKBU214/24).
Publisher copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/4/24
Y1 - 2025/4/24
N2 - Recommender systems are widely employed to address information overload for users, making it essential to mitigate their unfairness issues. Current user-side fairness studies in recommendations aim to ensure the independence of users' sensitive attributes in terms of outcome or process. Specifically, the former emphasizes equity of outcome metrics across user groups defined by sensitive attributes, for which regularization has been proposed as a typical approach to reducing the recommendation imbalance among groups. The latter, process fairness, focuses on the independence between sensitive attributes and recommendations during the process, for which adversarial learning has been widely adopted to remove sensitive information from individual users' representations. However, little work has investigated how these methods balance group-level outcome fairness and individual-level process fairness in a particular scenario. Moreover, existing experiments have primarily been performed on one type of user attribute, e.g., behavioral or demographic attributes, while neglecting other possible sensitive attributes such as psychological attributes. In this paper, we investigate both the outcome and process fairness performance of regularization and adversarial learning methods over multiple types of sensitive attributes, including behavioral (e.g., activity level), demographic (e.g., gender, age), and psychological (e.g., Big-Five personality, curiosity) attributes. Experiments on four datasets show that unfairness can exist in different forms for different types of sensitive attributes, and that, relative to the regularization-based method, adversarial learning has a higher potential to achieve a balance between outcome and process user-side fairness in recommendations. Our source code is available at https://github.com/WeixinChen98/OtPrFairness-MultiAttr.
KW - Fairness
KW - Recommendations
KW - Behavioral Attributes
KW - Demographic Attributes
KW - Psychological Attributes
KW - Outcome Fairness
KW - Process Fairness
KW - Regularization
KW - Adversarial Learning
KW - Recommender Systems
U2 - 10.1145/3731568
DO - 10.1145/3731568
M3 - Journal article
SN - 2770-6699
JO - ACM Transactions on Recommender Systems
JF - ACM Transactions on Recommender Systems
ER -