TY - GEN
T1 - Thwarting passive privacy attacks in collaborative filtering
AU - Chen, Rui
AU - Xie, Min
AU - Lakshmanan, Laks V.S.
N1 - Copyright:
Copyright 2016 Elsevier B.V., All rights reserved.
PY - 2014
Y1 - 2014
AB - While recommender systems based on collaborative filtering have become an essential tool for helping users access items of interest, it has been shown that collaborative filtering enables an adversary to perform passive privacy attacks, one of the most damaging and easiest-to-perform types of privacy attack. In a passive privacy attack, the dynamic nature of a recommender system allows an adversary with a moderate amount of background knowledge to infer a user's transaction through temporal changes in the public related-item lists (RILs). Unlike traditional solutions that manipulate the underlying user-item rating matrix, in this paper we respond to passive privacy attacks by directly anonymizing the RILs, which are the actual outputs rendered to an adversary. This fundamental switch allows us to provide a novel, rigorous, inference-proof privacy guarantee, known as δ-bound, with desirable data utility and scalability. We propose anonymization algorithms based on suppression and on a novel mechanism, permutation, tailored to our problem. Experiments on real-life data demonstrate that our solutions are both effective and efficient.
UR - http://www.scopus.com/inward/record.url?scp=84958546523&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-05813-9_15
DO - 10.1007/978-3-319-05813-9_15
M3 - Conference proceeding
AN - SCOPUS:84958546523
SN - 9783319058122
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 218
EP - 233
BT - Database Systems for Advanced Applications - 19th International Conference, DASFAA 2014, Proceedings
PB - Springer Verlag
T2 - 19th International Conference on Database Systems for Advanced Applications, DASFAA 2014
Y2 - 21 April 2014 through 24 April 2014
ER -