Boosted Kernel Ridge Regression: Optimal Learning Rates and Early Stopping

Shao-Bo Lin, Yunwen Lei*, Ding-Xuan Zhou

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

18 Citations (Scopus)

Abstract

In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines L2-Boosting with kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off, tuned via the number of boosting iterations, which differs from the trade-off in KRR obtained by adjusting the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, an adaptive stopping rule is proposed, with which BKRR achieves the optimal learning rate without saturation.
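To make the boosting mechanism concrete, below is a minimal sketch of L2-Boosting with a KRR base learner in the spirit of the abstract: each round fits KRR to the current residuals and accumulates the dual coefficients, with the iteration count serving as the early-stopping parameter. The Gaussian kernel, the fixed regularization value `lam`, and all function names are illustrative assumptions, not the paper's exact construction.

```python
# A minimal BKRR sketch (assumptions: Gaussian kernel, fixed lam, toy data).
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def bkrr_fit(X, y, lam=1e-2, n_iters=10):
    """Run n_iters rounds of L2-Boosting, each round a KRR fit to residuals.

    Returns accumulated dual coefficients; n_iters plays the role of the
    early-stopping (bias-variance trade-off) parameter.
    """
    K = gaussian_kernel(X, X)
    n = len(y)
    alpha = np.zeros(n)               # accumulated dual coefficients
    residual = y.copy()
    for _ in range(n_iters):
        # One KRR step on the current residuals: (K + n*lam*I) a = residual.
        a = np.linalg.solve(K + n * lam * np.eye(n), residual)
        alpha += a
        residual = y - K @ alpha      # residuals for the next boosting round
    return alpha

def bkrr_predict(X_train, alpha, X_test):
    return gaussian_kernel(X_test, X_train) @ alpha

# Usage on toy data: more iterations reduce bias but can raise variance,
# so n_iters would be chosen by a stopping rule (e.g., hold-out error).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(100)
alpha = bkrr_fit(X, y, lam=1e-2, n_iters=5)
y_hat = bkrr_predict(X, alpha, X)
print("training MSE:", np.mean((y - y_hat) ** 2))
```

In this sketch the stopping rule is left as a fixed `n_iters`; the paper's adaptive rule selects the iteration count from the data so that the optimal learning rate is attained without saturation.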

Original language: English
Number of pages: 36
Journal: Journal of Machine Learning Research
Volume: 20
Publication status: Published - Feb 2019

Scopus Subject Areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence

User-Defined Keywords

  • Boosting
  • Integral operator
  • Kernel ridge regression
  • Learning theory
