Abstract
In this paper, we introduce a learning algorithm, boosted kernel ridge regression (BKRR), that combines L2-Boosting with kernel ridge regression (KRR). We analyze the learning performance of this algorithm in the framework of learning theory. We show that BKRR provides a new bias-variance trade-off by tuning the number of boosting iterations, in contrast to KRR, which tunes the regularization parameter. A (semi-)exponential bias-variance trade-off is derived for BKRR, exhibiting a stable relationship between the generalization error and the number of iterations. Furthermore, we propose an adaptive stopping rule with which BKRR achieves the optimal learning rate without saturation.
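To illustrate the idea, here is a minimal sketch of L2-Boosting with a KRR base learner: each iteration fits KRR to the current residuals and adds the resulting coefficients to the running estimate, so the iteration count plays the role of the regularizer. The Gaussian kernel, the fixed iteration budget, and all function names are illustrative assumptions; in particular, the fixed `n_iters` stands in for the paper's adaptive stopping rule, which is not reproduced here.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel between rows of X and Z.
    (Kernel choice is an illustrative assumption.)"""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def bkrr_fit(K, y, lam, n_iters):
    """L2-Boosting with a KRR base learner: repeatedly fit KRR to the
    current residuals and accumulate the dual coefficients.
    n_iters is a fixed budget here, standing in for the adaptive rule."""
    n = len(y)
    solve = np.linalg.inv(K + lam * n * np.eye(n))  # fixed KRR solve operator
    alpha = np.zeros(n)                             # accumulated coefficients
    for _ in range(n_iters):
        residual = y - K @ alpha                    # current residuals
        alpha += solve @ residual                   # KRR step on residuals
    return alpha

# Toy usage: regress a smooth target from noisy samples.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(100)
K = gaussian_kernel(X, X, sigma=0.5)
alpha = bkrr_fit(K, y, lam=1e-2, n_iters=20)
X_test = np.linspace(-1, 1, 5).reshape(-1, 1)
y_pred = gaussian_kernel(X_test, X, sigma=0.5) @ alpha
```

With a fixed ridge parameter `lam`, increasing `n_iters` reduces bias and increases variance, which is the trade-off the abstract describes being tuned by the number of boosting iterations rather than by the regularization parameter.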
| Original language | English |
| --- | --- |
| Number of pages | 36 |
| Journal | Journal of Machine Learning Research |
| Volume | 20 |
| Publication status | Published - Feb 2019 |
Scopus Subject Areas
- Software
- Control and Systems Engineering
- Statistics and Probability
- Artificial Intelligence
User-Defined Keywords
- Boosting
- Integral operator
- Kernel ridge regression
- Learning theory