TY - JOUR
T1 - Privacy-Preserving Stochastic Gradual Learning
AU - Han, Bo
AU - Tsang, Ivor W.
AU - Xiao, Xiaokui
AU - Chen, Ling
AU - Fung, Sai Fu
AU - Yu, Celina P.
N1 - Funding Information:
Prof. Ivor W. Tsang was supported in part by ARC LP150100671, DP180100106, and DP200101328. Prof. Xiaokui Xiao was supported in part by MOE, Singapore under Grant MOE2015-T2-2-069, and by NUS, Singapore under an SUG. Prof. Ling Chen was supported by ARC DP180100966. Dr. Bo Han was supported in part by HKBU Tier-1 Start-up Grant and HKBU CSD Start-up Grant.
Publisher Copyright:
© 2020 IEEE.
PY - 2021/8/1
Y1 - 2021/8/1
N2 - It is challenging for stochastic optimization to handle large-scale sensitive data safely. Duchi et al. recently proposed a private sampling strategy to address privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it amounts to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization method under the framework of local privacy, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (by private sampling) with gradual curriculum learning (CL). The noise injection causes an issue similar to label noise, which the robust learning process of CL can combat. Thus, PRESTIGE yields "private but robust" updates of the primal variable on the curriculum, that is, a reordered label sequence provided by CL. In theory, we derive the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a good tradeoff between privacy preservation and robustness compared with baseline methods.
AB - It is challenging for stochastic optimization to handle large-scale sensitive data safely. Duchi et al. recently proposed a private sampling strategy to address privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it amounts to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization method under the framework of local privacy, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (by private sampling) with gradual curriculum learning (CL). The noise injection causes an issue similar to label noise, which the robust learning process of CL can combat. Thus, PRESTIGE yields "private but robust" updates of the primal variable on the curriculum, that is, a reordered label sequence provided by CL. In theory, we derive the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a good tradeoff between privacy preservation and robustness compared with baseline methods.
KW - differential privacy
KW - robustness
KW - stochastic optimization
UR - http://www.scopus.com/inward/record.url?scp=85112035197&partnerID=8YFLogxK
U2 - 10.1109/TKDE.2020.2963977
DO - 10.1109/TKDE.2020.2963977
M3 - Journal article
AN - SCOPUS:85112035197
SN - 1041-4347
VL - 33
SP - 3129
EP - 3140
JO - IEEE Transactions on Knowledge and Data Engineering
JF - IEEE Transactions on Knowledge and Data Engineering
IS - 8
ER -