Abstract
We consider stochastic convex optimization for heavy-tailed data with the guarantee of being differentially private (DP). Most prior works on differentially private stochastic convex optimization for heavy-tailed data are either restricted to gradient descent (GD) or perform clipping multiple times within stochastic gradient descent (SGD), which is inefficient for large-scale problems. In this paper, we consider a one-time clipping strategy and provide principled analyses of its bias and private mean estimation. We establish new convergence results and improved complexity bounds for the proposed algorithm, called AClipped-dpSGD, for constrained and unconstrained convex problems. We also extend our convergence analysis to the strongly convex case and the non-smooth case (which covers generalized smooth objectives with Hölder-continuous gradients). All the above results hold with high probability for heavy-tailed data. Numerical experiments are conducted to justify the theoretical improvement.
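To make the one-time (averaged) clipping idea concrete, the following is a minimal illustrative sketch of one such update step. It is an assumption about the general shape of the method, not the paper's exact algorithm or its noise calibration: per-sample stochastic gradients are averaged, the average is clipped once, Gaussian noise is added for DP, and a standard SGD step follows. The function names and parameters (`sample_gradient`, `clip_threshold`, `noise_std`) are hypothetical.

```python
import numpy as np

def aclipped_dpsgd_step(x, sample_gradient, data_batch, clip_threshold, noise_std, lr, rng):
    """Illustrative averaged-clipping DP-SGD step (assumed form, not the authors' exact spec)."""
    # Average the per-sample stochastic gradients over the mini-batch.
    avg_grad = np.mean([sample_gradient(x, z) for z in data_batch], axis=0)

    # One-time clipping: rescale the averaged gradient so its norm is at most clip_threshold.
    norm = np.linalg.norm(avg_grad)
    clipped = avg_grad * min(1.0, clip_threshold / (norm + 1e-12))

    # Add Gaussian noise (standard deviation chosen elsewhere to meet the DP budget).
    noisy = clipped + rng.normal(0.0, noise_std, size=clipped.shape)

    # Plain SGD update with the privatized gradient estimate.
    return x - lr * noisy
```

Compared with per-sample clipping, this sketch clips only once per iteration, which is the efficiency gain the abstract refers to; the bias and privacy analysis of this averaged-then-clipped estimator is the technical content of the paper.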
| Original language | English |
|---|---|
| Pages (from-to) | 8487-8532 |
| Number of pages | 46 |
| Journal | Machine Learning |
| Volume | 113 |
| Issue number | 11 |
| Early online date | 30 Sept 2024 |
| DOIs | |
| Publication status | Published - Dec 2024 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 9: Industry, Innovation, and Infrastructure
User-Defined Keywords
- Convergence analysis
- Convex optimization
- Differential privacy
- Gradient clipping
- Heavy-tailed data
- Stochastic gradient descent