Learning Theory of Randomized Sparse Kaczmarz Method

Yunwen Lei, Ding Xuan Zhou*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

10 Citations (Scopus)

Abstract

In this paper we propose an online learning algorithm, a general randomized sparse Kaczmarz method, for generating sparse approximate solutions to linear systems, and present a learning theory analysis of its convergence. Under a mild assumption covering the case of noisy random measurements in the sampling process or a nonlinear regression function, we show that the algorithm converges in expectation if and only if the step size sequence {η_t}_{t∈ℕ} satisfies lim_{t→∞} η_t = 0 and ∑_{t=1}^∞ η_t = ∞. Convergence rates are also obtained, and linear convergence is shown to be impossible under the assumption of positive variance of the sampling process. A sufficient condition for almost sure convergence is derived under the additional restriction ∑_{t=1}^∞ η_t² < ∞. Our novel analysis is performed by interpreting the randomized sparse Kaczmarz method as a special online mirror descent algorithm with a nondifferentiable mirror map and using the Bregman distance. The sufficient and necessary conditions are derived by establishing a restricted variant of strong convexity for the involved generalization error and using the special structures of the soft-thresholding operator.
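The iteration the abstract describes can be sketched in a few lines: sample a random equation, take a gradient-type step on an auxiliary variable, and apply the soft-thresholding operator to obtain the sparse iterate. The following is a minimal illustrative sketch, not the paper's exact algorithm; the function names, the choice η_t = 1/√(t+1) (which satisfies η_t → 0 and ∑η_t = ∞), and the regularization level `lam` are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(z, lam):
    # Componentwise soft-thresholding: S_lam(z) = sign(z) * max(|z| - lam, 0).
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def randomized_sparse_kaczmarz(A, b, lam=0.1, n_iter=5000, seed=0):
    # Illustrative sketch of a randomized sparse Kaczmarz / linearized
    # Bregman iteration for A x = b. The step sizes eta_t = 1/sqrt(t+1)
    # are one choice satisfying the convergence conditions in the abstract:
    # eta_t -> 0 and sum_t eta_t = infinity.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    z = np.zeros(n)          # auxiliary (mirror) variable
    x = np.zeros(n)          # sparse iterate
    for t in range(n_iter):
        i = rng.integers(m)              # sample one equation at random
        eta = 1.0 / np.sqrt(t + 1)      # diminishing step size
        z = z - eta * (A[i] @ x - b[i]) * A[i]
        x = soft_threshold(z, lam)      # nondifferentiable mirror map step
    return x
```

For a consistent system the iterates approach a sparse solution in expectation; with a constant step size, the positive variance of the row sampling would prevent linear convergence, matching the impossibility result stated above.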

Original language: English
Pages (from-to): 547-574
Number of pages: 28
Journal: SIAM Journal on Imaging Sciences
Volume: 11
Issue number: 1
DOIs
Publication status: Published - Jan 2018

Scopus Subject Areas

  • Mathematics (all)
  • Applied Mathematics

User-Defined Keywords

  • Bregman distance
  • Learning theory
  • Linearized Bregman iteration
  • Online learning
  • Randomized sparse Kaczmarz algorithm
