Bilateral Dependency Optimization: Defending Against Model-inversion Attacks

Xiong Peng, Feng Liu, Jingfeng Zhang, Long Lan*, Junjie Ye, Tongliang Liu, Bo Han*

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

10 Citations (Scopus)

Abstract

Using only a well-trained classifier, model-inversion (MI) attacks can recover the data used to train that classifier, leading to privacy leakage of the training data. To defend against MI attacks, previous work adopts a unilateral dependency optimization strategy, i.e., minimizing the dependency between inputs (i.e., features) and outputs (i.e., labels) while training the classifier. However, such minimization conflicts with minimizing the supervised loss, which aims to maximize the dependency between inputs and outputs, causing an explicit trade-off between robustness against MI attacks and utility on the classification task. In this paper, we instead minimize the dependency between the latent representations and the inputs while maximizing the dependency between the latent representations and the outputs, a strategy we name bilateral dependency optimization (BiDO). In particular, we use the dependency constraints as a universally applicable regularizer on top of commonly used losses for deep neural networks (e.g., cross-entropy); the regularizer can be instantiated with appropriate dependency criteria according to the task. To verify the efficacy of our strategy, we propose two implementations of BiDO based on two different dependency measures: BiDO with constrained covariance (BiDO-COCO) and BiDO with the Hilbert-Schmidt Independence Criterion (BiDO-HSIC). Experiments show that BiDO achieves state-of-the-art defense performance across a variety of datasets, classifiers, and MI attacks while incurring only a minor drop in classification accuracy compared to the undefended, well-trained classifier, opening a new avenue for defending against MI attacks.
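The abstract describes BiDO as a dependency regularizer added to the standard cross-entropy loss: for each hidden representation, dependence on the inputs is penalized and dependence on the labels is rewarded. The PyTorch sketch below illustrates one way the BiDO-HSIC variant could be instantiated with the biased empirical HSIC estimator; the Gaussian kernel, bandwidth `sigma`, choice of layers, and weights `lambda_x` / `lambda_y` are illustrative assumptions, not the paper's reported settings.

```python
# A minimal sketch of a BiDO-HSIC-style objective (assumed hyperparameters).
import torch
import torch.nn.functional as F

def gaussian_gram(x: torch.Tensor, sigma: float) -> torch.Tensor:
    """Gaussian-kernel Gram matrix over a batch of flattened features."""
    x = x.flatten(start_dim=1)
    sq_dists = torch.cdist(x, x, p=2) ** 2
    return torch.exp(-sq_dists / (2 * sigma ** 2))

def hsic(kx: torch.Tensor, ky: torch.Tensor) -> torch.Tensor:
    """Biased empirical HSIC estimate: tr(Kx H Ky H) / (n - 1)^2."""
    n = kx.size(0)
    h = torch.eye(n, device=kx.device) - torch.ones(n, n, device=kx.device) / n
    return torch.trace(kx @ h @ ky @ h) / (n - 1) ** 2

def bido_hsic_loss(inputs, hidden_feats, logits, targets,
                   lambda_x=0.05, lambda_y=0.5, sigma=5.0):
    """Cross-entropy plus a BiDO-style regularizer: for each hidden layer,
    penalize dependence on the inputs and reward dependence on the labels."""
    ce = F.cross_entropy(logits, targets)
    kx = gaussian_gram(inputs, sigma)
    ky = gaussian_gram(F.one_hot(targets, logits.size(1)).float(), sigma)
    reg = 0.0
    for z in hidden_feats:  # list of intermediate representations
        kz = gaussian_gram(z, sigma)
        reg = reg + lambda_x * hsic(kx, kz) - lambda_y * hsic(kz, ky)
    return ce + reg
```

In use, the classifier would return its intermediate features alongside the logits for each mini-batch, and `bido_hsic_loss` would replace the plain cross-entropy term during training; swapping the HSIC estimator for a constrained-covariance measure would give the BiDO-COCO counterpart.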
Original language: English
Title of host publication: KDD '22: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
Publisher: Association for Computing Machinery (ACM)
Pages: 1358–1367
Number of pages: 10
ISBN (Print): 9781450393850
DOIs
Publication status: Published - 14 Aug 2022
Event: 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2022 - Washington, DC, United States
Duration: 14 Aug 2022 – 18 Aug 2022
https://kdd.org/kdd2022/index.html
https://dl.acm.org/doi/proceedings/10.1145/3534678

Conference

Conference: 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2022
Country/Territory: United States
City: Washington, DC
Period: 14/08/22 – 18/08/22

User-Defined Keywords

  • Deep neural networks
  • model-inversion attacks
  • privacy leakage
  • statistical dependency
