Early Stopping for Iterative Regularization with General Loss Functions

Ting Hu, Yunwen Lei*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

1 Citation (Scopus)

Abstract

In this paper, we investigate the early stopping strategy for iterative regularization, which performs gradient descent on convex loss functions in reproducing kernel Hilbert spaces without an explicit regularization term. We show that projecting the last iterate at the stopping time produces an estimator with improved generalization ability. Using upper bounds on the generalization error, we establish a close link between iterative regularization and the Tikhonov regularization scheme, and explain theoretically why the two schemes exhibit similar regularization paths in existing numerical simulations. We introduce a data-dependent, cross-validation-based method to select the stopping time, and prove that this a posteriori selection achieves generalization errors comparable to those obtained by our stopping rules with a priori parameters.
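The abstract describes a concrete pipeline: run unregularized gradient descent in an RKHS, stop early, project the selected iterate onto a norm ball, and pick the stopping time from held-out data. The sketch below illustrates that pipeline for the least-squares loss (one instance of the general convex losses the paper covers). The Gaussian kernel, step size, function names, and single hold-out split are illustrative assumptions, not the paper's actual algorithm or notation.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_gd_path(K, y, step=0.1, T=200):
    # Functional gradient descent on the (unregularized) least-squares
    # risk: f <- f - (step/n) * sum_i (f(x_i) - y_i) k(x_i, .).
    # Each iterate alpha_t is one point on the iterative-regularization path.
    n = len(y)
    alpha = np.zeros(n)
    path = []
    for _ in range(T):
        residual = K @ alpha - y
        alpha = alpha - (step / n) * residual
        path.append(alpha.copy())
    return path

def project_rkhs_ball(alpha, K, R):
    # Project f = sum_i alpha_i k(x_i, .) onto the RKHS ball of radius R
    # by rescaling when ||f||_K exceeds R.
    norm = np.sqrt(max(alpha @ K @ alpha, 0.0))
    return alpha if norm <= R else (R / norm) * alpha

def select_stopping_time(X_tr, y_tr, X_val, y_val, sigma=1.0, step=0.1, T=200):
    # Data-dependent (hold-out) choice of the stopping time:
    # return the iterate with the smallest validation error.
    K_tr = gaussian_kernel(X_tr, X_tr, sigma)
    K_val = gaussian_kernel(X_val, X_tr, sigma)
    path = kernel_gd_path(K_tr, y_tr, step, T)
    errs = [np.mean((K_val @ a - y_val) ** 2) for a in path]
    t_star = int(np.argmin(errs))
    return t_star, path[t_star]

# Toy usage: noisy sinusoid, hold-out selection, then projection.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(120)
t_star, alpha = select_stopping_time(X[:80], y[:80], X[80:], y[80:])
alpha = project_rkhs_ball(alpha, gaussian_kernel(X[:80], X[:80]), R=5.0)
print("selected stopping time:", t_star)
```

A single hold-out split is used here for brevity; the paper's cross-validation rule would average the validation error over several folds before choosing the stopping time.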
Original language: English
Number of pages: 36
Journal: Journal of Machine Learning Research
Volume: 23
Publication status: Published - Aug 2022

User-Defined Keywords

  • iterative regularization
  • early stopping
  • reproducing kernel Hilbert spaces
  • stopping rule
  • cross-validation
