Privacy-Preserving Stochastic Gradual Learning

Bo Han, Ivor W. Tsang*, Xiaokui Xiao, Ling Chen, Sai Fu Fung, Celina P. Yu

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

5 Citations (Scopus)

Abstract

It is challenging for stochastic optimization to handle large-scale sensitive data safely. Duchi et al. recently proposed a private sampling strategy to address privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it is equivalent to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization method under the framework of local privacy, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (by private sampling) with gradual curriculum learning (CL). The noise injection causes an issue similar to label noise, but the robust learning process of CL can combat label noise. Thus, PRESTIGE yields "private but robust" updates of the primal variable on the curriculum, that is, a reordered label sequence provided by CL. In theory, we reveal the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a good tradeoff between privacy preservation and robustness over baselines.
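To make the abstract's mechanism concrete, the following is a minimal illustrative sketch (not the authors' PRESTIGE algorithm) that combines the two ingredients the abstract describes: noise injection on each stochastic gradient, as in a local-privacy mechanism, and an easy-to-hard curriculum ordering of samples. The logistic-loss model, Laplace noise, and all function names are assumptions made for illustration only.

import numpy as np

def noisy_gradient(grad, scale):
    # Inject Laplace noise into one stochastic gradient, mimicking the effect
    # of a local-privacy mechanism on each update (illustrative only).
    return grad + np.random.laplace(loc=0.0, scale=scale, size=grad.shape)

def curriculum_order(X, y, w):
    # Order samples from "easy" to "hard" by their current logistic loss,
    # a simple stand-in for the reordered sequence provided by CL.
    margins = y * (X @ w)
    losses = np.log1p(np.exp(-margins))
    return np.argsort(losses)  # smallest loss (easiest) first

def private_curriculum_sgd(X, y, lr=0.1, noise_scale=0.5, epochs=5):
    # Sketch of SGD where every update uses a noise-injected gradient and
    # samples are visited in curriculum (easy-to-hard) order.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in curriculum_order(X, y, w):
            # Logistic-loss gradient for a single sample (labels in {-1, +1}).
            g = -y[i] * X[i] / (1.0 + np.exp(y[i] * (X[i] @ w)))
            w -= lr * noisy_gradient(g, noise_scale)
    return w

The intent of the sketch is only to show why the curriculum matters: the noisy gradients behave like updates computed under label noise, and processing easier samples first limits how much that noise derails the primal variable early in training.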

Original language: English
Pages (from-to): 3129-3140
Number of pages: 12
Journal: IEEE Transactions on Knowledge and Data Engineering
Volume: 33
Issue number: 8
Early online date: 3 Jan 2020
DOIs
Publication status: Published - 1 Aug 2021

Scopus Subject Areas

  • Information Systems
  • Computer Science Applications
  • Computational Theory and Mathematics

User-Defined Keywords

  • differential privacy
  • robustness
  • stochastic optimization
