Abstract
Correntropy-based learning has achieved great success in practice over the last few decades. It originates from information-theoretic learning and provides an alternative to the classical least squares method in the presence of non-Gaussian noise. In this paper, we investigate the theoretical properties of learning algorithms generated by Tikhonov regularization schemes associated with Gaussian kernels and the correntropy loss. By choosing an appropriate scale parameter for the Gaussian kernel, we show polynomial decay of the approximation error under a Sobolev smoothness condition. In addition, we employ a tight upper bound for the uniform covering number of the Gaussian RKHS to improve the estimate of the sample error. Based on these two results, we show that the proposed algorithm, with a varying Gaussian kernel, achieves the minimax rate of convergence (up to a logarithmic factor) without knowing the smoothness level of the regression function.
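For orientation, the regularization scheme referred to above can be sketched as follows. This is a minimal sketch with assumed standard notation, not the paper's exact formulation: $\mathcal{H}_{\sigma}$ denotes the RKHS of the Gaussian kernel with scale parameter $\sigma$, $\ell_{h}$ a correntropy (Welsch-type) loss with scaling parameter $h$, $\lambda > 0$ the regularization parameter, and $\{(x_i, y_i)\}_{i=1}^{m}$ the sample; the normalizations may differ from those used in the paper.

```latex
% Tikhonov regularization over the Gaussian RKHS with a correntropy loss
% (sketch only; the loss scaling and kernel normalization are assumptions)
f_{z,\lambda} = \arg\min_{f \in \mathcal{H}_{\sigma}}
  \left\{ \frac{1}{m} \sum_{i=1}^{m} \ell_{h}\bigl(y_i - f(x_i)\bigr)
          + \lambda \lVert f \rVert_{\mathcal{H}_{\sigma}}^{2} \right\},
\qquad
\ell_{h}(t) = h^{2}\left(1 - e^{-t^{2}/h^{2}}\right),
\qquad
K_{\sigma}(x, x') = e^{-\lVert x - x' \rVert^{2}/\sigma^{2}}.
```

Since $\ell_{h}(t) \to t^{2}$ as $h \to \infty$, this loss interpolates toward least squares, while for finite $h$ it bounds the influence of large residuals, which is the usual motivation for correntropy-based learning under non-Gaussian noise.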
| Original language | English |
| --- | --- |
| Pages (from-to) | 107-124 |
| Number of pages | 18 |
| Journal | Analysis and Applications |
| Volume | 19 |
| Issue number | 1 |
| Early online date | 13 Dec 2019 |
| DOIs | |
| Publication status | Published - Jan 2021 |
Scopus Subject Areas
- Analysis
- Applied Mathematics
User-Defined Keywords
- Convergence rate
- correntropy loss
- Gaussian kernels
- minimax optimality
- reproducing kernel Hilbert space