TY - JOUR
T1 - Differentially private stochastic gradient descent with low-noise
AU - Wang, Puyu
AU - Lei, Yunwen
AU - Ying, Yiming
AU - Zhou, Ding-Xuan
N1 - The work described in this paper was partially done while the first and last authors, Puyu Wang and Ding-Xuan Zhou, were at City University of Hong Kong. Ding-Xuan Zhou’s work is supported by the Laboratory for AI-Powered Financial Technologies under the InnoHK scheme, the Research Grants Council of Hong Kong [Projects No. CityU 11308121, No. N_CityU102/20, and No. C1013-21GF], the National Science Foundation of China [Project No. 12061160462], and the Hong Kong Institute for Data Science. Yiming Ying’s work is supported by National Science Foundation grants (IIS-2103450, IIS-2110546, and DMS-2110836). The work of Yunwen Lei is partially supported by the Research Grants Council of Hong Kong [Project No. 22303723].
Publisher Copyright:
© 2024 The Author(s)
PY - 2024/6/7
Y1 - 2024/6/7
N2 - Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection. It is therefore of both practical and theoretical importance to develop privacy-preserving machine learning algorithms that achieve good performance. In this paper, we focus on the privacy and utility (measured by excess risk bounds) of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization. Specifically, we examine the pointwise problem in the low-noise setting, for which we derive sharper excess risk bounds for the differentially private SGD algorithm. In the pairwise learning setting, we propose a simple differentially private SGD algorithm based on gradient perturbation. Furthermore, we develop novel utility bounds for the proposed algorithm, proving that it achieves optimal excess risk rates even for non-smooth losses. Notably, we establish fast learning rates for privacy-preserving pairwise learning under the low-noise condition, which are the first results of their kind.
AB - Modern machine learning algorithms aim to extract fine-grained information from data to provide accurate predictions, which often conflicts with the goal of privacy protection. It is therefore of both practical and theoretical importance to develop privacy-preserving machine learning algorithms that achieve good performance. In this paper, we focus on the privacy and utility (measured by excess risk bounds) of differentially private stochastic gradient descent (SGD) algorithms in the setting of stochastic convex optimization. Specifically, we examine the pointwise problem in the low-noise setting, for which we derive sharper excess risk bounds for the differentially private SGD algorithm. In the pairwise learning setting, we propose a simple differentially private SGD algorithm based on gradient perturbation. Furthermore, we develop novel utility bounds for the proposed algorithm, proving that it achieves optimal excess risk rates even for non-smooth losses. Notably, we establish fast learning rates for privacy-preserving pairwise learning under the low-noise condition, which are the first results of their kind.
KW - Differential privacy
KW - Generalization
KW - Low-noise
KW - Stochastic gradient descent
UR - http://www.scopus.com/inward/record.url?scp=85189760691&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2024.127557
DO - 10.1016/j.neucom.2024.127557
M3 - Journal article
AN - SCOPUS:85189760691
SN - 0925-2312
VL - 585
JO - Neurocomputing
JF - Neurocomputing
M1 - 127557
ER -