Abstract
Support vector machine regression (SVMR) is an important tool in many machine learning applications. In this paper, we focus on the theoretical understanding of SVMR based on the ϵ-insensitive loss. For fixed ϵ ≥ 0 and general data-generating distributions, we show that the minimizer of the expected risk for the ϵ-insensitive loss used in SVMR is a set-valued function called the conditional ϵ-median. We then establish a calibration inequality for the ϵ-insensitive loss under a noise condition on the conditional distributions. This inequality also enables us to present a nontrivial variance-expectation bound for the ϵ-insensitive loss, which is known to be important in the statistical analysis of regularized learning algorithms. With the help of the calibration inequality and the variance-expectation bound, we finally derive an explicit learning rate for SVMR in an L^r-space.
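For reference (not spelled out in the record itself), the ϵ-insensitive loss underlying SVMR is the standard one from support vector regression; a minimal sketch in LaTeX, where f denotes a candidate regression function:

```latex
% epsilon-insensitive loss: zero inside the tube of half-width epsilon
% around the prediction, growing linearly outside it.
L_\epsilon\bigl(y, f(x)\bigr) \;=\; \max\bigl\{0,\; |y - f(x)| - \epsilon\bigr\}
```

For ϵ = 0 this reduces to the absolute-value loss, whose risk minimizer is the conditional median; the conditional ϵ-median named in the abstract generalizes that minimizer to ϵ > 0.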
| Original language | English |
| --- | --- |
| Pages (from-to) | 2111-2129 |
| Number of pages | 19 |
| Journal | Journal of the Franklin Institute |
| Volume | 356 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - Mar 2019 |
Scopus Subject Areas
- Control and Systems Engineering
- Signal Processing
- Computer Networks and Communications
- Applied Mathematics