Stochastic Gradient Descent for Nonconvex Learning without Bounded Gradient Assumptions

Yunwen Lei, Ting Hu, Guiying Li*, Ke Tang

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

57 Citations (Scopus)

Abstract

Stochastic gradient descent (SGD) is a popular and efficient method with wide applications in training deep neural nets and other nonconvex models. While the behavior of SGD is well understood in the convex learning setting, the existing theoretical results for SGD applied to nonconvex objective functions are far from mature. For example, existing results require imposing a nontrivial assumption of uniform boundedness of gradients for all iterates encountered in the learning process, which is hard to verify in practical implementations. In this article, we establish a rigorous theoretical foundation for SGD in nonconvex learning by showing that this boundedness assumption can be removed without affecting convergence rates, and that the standard smoothness assumption can be relaxed to Hölder continuity of gradients. In particular, we establish sufficient conditions for almost sure convergence as well as optimal convergence rates for SGD applied to both general nonconvex and gradient-dominated objective functions. Linear convergence is further derived in the case of zero variance.
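To make the setting concrete, the following minimal sketch (a hypothetical illustration, not the authors' code) runs plain SGD with polynomially decaying step sizes on a simple nonconvex least-squares objective with a sigmoid model. Note that no gradient clipping or projection is applied, which matches the setting analyzed here, where uniform boundedness of gradients along the iterates is not assumed; the data, model, and step-size constant are arbitrary choices for illustration only.

```python
import numpy as np

# Hypothetical example: SGD on a nonconvex objective
# f(w) = (1/n) * sum_i (sigmoid(x_i . w) - y_i)^2,
# which is nonconvex in w. Step sizes eta_t = eta0 / sqrt(t + 1) decay
# polynomially; no projection or clipping of gradients is used.
rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = 1.0 / (1.0 + np.exp(-X @ w_true)) + 0.1 * rng.normal(size=n)

def stoch_grad(w, i):
    """Gradient of the squared loss on a single example i."""
    p = 1.0 / (1.0 + np.exp(-X[i] @ w))
    return 2.0 * (p - y[i]) * p * (1.0 - p) * X[i]

w = np.zeros(d)
eta0 = 1.0
for t in range(5000):
    i = rng.integers(n)                            # sample one example uniformly
    w -= eta0 / np.sqrt(t + 1) * stoch_grad(w, i)  # unconstrained SGD step

final_loss = np.mean((1.0 / (1.0 + np.exp(-X @ w)) - y) ** 2)
print("final training loss:", final_loss)
```

The 1/sqrt(t) step-size schedule shown above is one common choice for nonconvex problems; other decaying schedules can be substituted without changing the structure of the iteration.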

Original language: English
Pages (from-to): 4394-4400
Number of pages: 7
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 31
Issue number: 10
Early online date: 11 Dec 2019
DOIs
Publication status: Published - Oct 2020

Scopus Subject Areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence

User-Defined Keywords

  • Learning theory
  • nonconvex optimization
  • Polyak–Łojasiewicz condition
  • stochastic gradient descent (SGD)
