High-probability generalization bounds for pointwise uniformly stable algorithms

Jun Fan, Yunwen Lei*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Peer-review

1 Citation (Scopus)


Algorithmic stability is a fundamental concept in statistical learning theory for understanding the generalization behavior of optimization algorithms. Existing high-probability bounds are developed for the generalization gap as measured by function values and require the algorithm to be uniformly stable. In this paper, we introduce a novel stability measure called pointwise uniform stability by considering the sensitivity of the algorithm with respect to the perturbation of each individual training example. We show that this weaker pointwise uniform stability still guarantees almost optimal bounds and yields the first high-probability bound for the generalization gap as measured by gradients. Sharper bounds are given for strongly convex and smooth problems. We further apply our general result to derive improved generalization bounds for stochastic gradient descent. As a byproduct, we develop concentration inequalities for sums of weakly dependent vector-valued random variables.
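For concreteness, one plausible formalization of the contrast drawn in the abstract is sketched below; the symbols \(A\), \(S\), \(S^{(i)}\), \(f\), and \(\beta_i\) are our own illustrative notation and may differ from the paper's. Classical uniform stability requires a single bound \(\beta\) on the change in loss when any one training example is replaced, whereas a pointwise notion allows a separate sensitivity parameter per example:

```latex
% Sketch (illustrative notation, not necessarily the paper's):
% S = (z_1, ..., z_n) is the training sample, and S^{(i)} is S with the
% i-th example replaced by an independent copy. A(S) is the model the
% algorithm returns, and f(w; z) is the loss of model w on example z.
\[
  \sup_{z}\,\bigl| f(A(S); z) - f(A(S^{(i)}); z) \bigr| \;\le\; \beta_i,
  \qquad i = 1, \dots, n.
\]
% Uniform stability is the special case \beta_1 = \cdots = \beta_n = \beta;
% tracking the individual \beta_i is what makes the pointwise notion weaker.
```

Under this reading, any uniformly stable algorithm is pointwise uniformly stable with \(\beta_i \equiv \beta\), which is consistent with the abstract's description of the new measure as weaker.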

Original language: English
Article number: 101632
Journal: Applied and Computational Harmonic Analysis
Early online date: 27 Jan 2024
Publication status: Published - May 2024

Scopus Subject Areas

  • Applied Mathematics

User-Defined Keywords

  • Algorithmic stability
  • Generalization analysis
  • Learning theory
  • Stochastic gradient descent


