RMR: A Relative Membership Risk Measure for Machine Learning Models

Li Bai, Haibo Hu*, Qingqing Ye, Jianliang Xu, Jin Li, Chengfang Fang, Jie Shi

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Privacy leakage poses a significant threat when machine learning foundation models trained on private data are released. One such threat is the membership inference attack (MIA), which determines whether a specific example was included in a model's training set. This paper shifts focus from developing new MIA algorithms to measuring a model's risk under MIA. We introduce a novel metric, Relative Membership Risk (RMR), which assesses a model's MIA vulnerability from a comparative standpoint. RMR calculates the difference in prediction loss for training examples relative to a predefined reference model, enabling risk comparison across models without needing to delve into details such as training strategy, architecture, or data distribution. We also explore the selection of the reference model and show that using a high-risk reference model enhances the accuracy of the RMR measure. To identify the most vulnerable reference model, we propose an efficient iterative algorithm that selects the optimal model from a set of candidates. Through extensive empirical evaluations on various datasets and network architectures, we demonstrate that RMR is an accurate and efficient tool for measuring the membership privacy risk of both individual training examples and the overall machine learning model.
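The abstract does not give the paper's exact formulation, but its core idea, scoring each training example by the gap in prediction loss between the target model and a reference model, can be sketched as below. The cross-entropy loss, sign convention, and function names are illustrative assumptions, not the authors' definition.

```python
import numpy as np

def per_example_loss(probs, labels):
    """Cross-entropy loss for each example given predicted class probabilities."""
    eps = 1e-12  # avoid log(0)
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

def rmr_scores(target_probs, reference_probs, labels):
    """Illustrative relative membership risk per training example:
    loss under the target model minus loss under the reference model.
    A large negative gap (the target fits the example far better than the
    reference does) would indicate higher membership exposure for that example."""
    return per_example_loss(target_probs, labels) - per_example_loss(reference_probs, labels)

# Toy usage: three training examples, two classes (hypothetical probabilities).
target_probs = np.array([[0.95, 0.05], [0.10, 0.90], [0.70, 0.30]])
reference_probs = np.array([[0.60, 0.40], [0.45, 0.55], [0.65, 0.35]])
labels = np.array([0, 1, 0])
print(rmr_scores(target_probs, reference_probs, labels))
```

A model-level risk score could then aggregate these per-example gaps (e.g., their mean), though the paper's aggregation and its reference-model selection procedure are not specified in the abstract.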
Original language: English
Pages (from-to): 1-12
Number of pages: 12
Journal: IEEE Transactions on Dependable and Secure Computing
Publication status: E-pub ahead of print - 18 Mar 2025

User-Defined Keywords

  • Machine learning
  • membership inference attack
  • privacy leakage
