Optimal learning with Gaussians and correntropy loss

Fusheng Lv, Jun Fan*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

26 Citations (Scopus)

Abstract

Correntropy-based learning has achieved great success in practice in recent decades. It originates from information-theoretic learning and provides an alternative to the classical least squares method in the presence of non-Gaussian noise. In this paper, we investigate the theoretical properties of learning algorithms generated by Tikhonov regularization schemes associated with Gaussian kernels and the correntropy loss. By choosing an appropriate scale parameter for the Gaussian kernel, we show the polynomial decay of the approximation error under a Sobolev smoothness condition. In addition, we employ a tight upper bound for the uniform covering number of the Gaussian RKHS to improve the estimate of the sample error. Based on these two results, we show that the proposed algorithm with a varying Gaussian kernel achieves the minimax rate of convergence (up to a logarithmic factor) without knowing the smoothness level of the regression function.
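
In the notation standard to this line of work (a sketch of the setup, not quoted from the paper), the scheme solves the Tikhonov-regularized problem over the Gaussian RKHS $\mathcal{H}_K$,

$$ f_{\mathbf{z},\lambda} = \arg\min_{f \in \mathcal{H}_K} \; \frac{1}{n} \sum_{i=1}^{n} \sigma^2 \Bigl(1 - e^{-(y_i - f(x_i))^2/\sigma^2}\Bigr) + \lambda \|f\|_K^2, $$

where $\sigma > 0$ is the correntropy scale and $K(x, x') = \exp(-\|x - x'\|^2/h^2)$ is the Gaussian kernel with scale parameter $h$. The Python sketch below is for illustration only and is not the authors' implementation: it assumes the objective is solved by half-quadratic iteration, which reduces each step to a weighted kernel ridge regression with weights $w_i = e^{-r_i^2/\sigma^2}$; all function names and parameter defaults are illustrative assumptions.

    import numpy as np

    def gaussian_kernel(X, Z, h):
        # K[i, j] = exp(-||x_i - z_j||^2 / h^2), the Gaussian kernel
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / h ** 2)

    def correntropy_krr(X, y, h=0.3, sigma=1.0, lam=1e-3, n_iter=20):
        # Illustrative sketch: Tikhonov regularization with the correntropy
        # (Welsch) loss in a Gaussian RKHS, solved by half-quadratic
        # iteration. Each step is a weighted kernel ridge regression with
        # weights w_i = exp(-r_i^2 / sigma^2) for residuals r_i.
        n = len(y)
        K = gaussian_kernel(X, X, h)
        alpha = np.linalg.solve(K + n * lam * np.eye(n), y)  # least-squares start
        for _ in range(n_iter):
            r = y - K @ alpha                        # current residuals
            w = np.exp(-r ** 2 / sigma ** 2)         # correntropy weights
            # First-order condition of the weighted subproblem:
            # (diag(w) K + n * lam * I) alpha = w * y
            alpha = np.linalg.solve(w[:, None] * K + n * lam * np.eye(n), w * y)
        return lambda Xnew: gaussian_kernel(Xnew, X, h) @ alpha

    # Example: sinusoid with sparse large outliers
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
    y[::20] += 5.0                                   # inject outliers
    f_hat = correntropy_krr(X, y)
    print(f_hat(X[:3]))

The weights $w_i$ shrink toward zero for samples with large residuals, so outliers are automatically downweighted; this is the mechanism behind the robustness to non-Gaussian noise described in the abstract.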

Original language: English
Pages (from-to): 107-124
Number of pages: 18
Journal: Analysis and Applications
Volume: 19
Issue number: 1
Early online date: 13 Dec 2019
DOIs
Publication status: Published - Jan 2021

Scopus Subject Areas

  • Analysis
  • Applied Mathematics

User-Defined Keywords

  • Convergence rate
  • Correntropy loss
  • Gaussian kernels
  • Minimax optimality
  • Reproducing kernel Hilbert space
