Abstract
This paper studies the binary classification problem associated with a family of Lipschitz convex loss functions called large-margin unified machines (LUMs), which offer a natural bridge between distribution-based likelihood approaches and margin-based approaches. LUMs can overcome the so-called data piling issue of the support vector machine in the high-dimension, low-sample-size setting, yet their theoretical analysis from the perspective of learning theory is still lacking. In this paper, we establish new comparison theorems for all LUM loss functions, which play a key role in the error analysis of large-margin learning algorithms. Building on these comparison theorems, we further derive learning rates for regularized LUM schemes associated with varying Gaussian kernels, which may be of independent interest.
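The abstract does not reproduce the loss family itself. For orientation, the sketch below implements the standard LUM loss parameterization introduced by Liu, Zhang and Wu (2011), which we assume is the family studied here; the function name `lum_loss` and the clipping constant are illustrative choices, not part of the paper.

```python
import numpy as np

def lum_loss(u, a=1.0, c=1.0):
    """LUM loss V(u) evaluated at the functional margin u = y * f(x).

    Standard parameterization (a > 0, c >= 0), assumed from the LUM literature:
        V(u) = 1 - u                                        if u <  c / (1 + c)
        V(u) = (1 / (1 + c)) * (a / ((1 + c) u - c + a))^a  if u >= c / (1 + c)
    The family is convex, Lipschitz, and continuously differentiable:
    c = 0 gives a soft (likelihood-type) classifier, c -> infinity
    recovers the SVM hinge loss, and a = 1, c = 1 gives the DWD loss.
    """
    u = np.asarray(u, dtype=float)
    threshold = c / (1.0 + c)
    linear_part = 1.0 - u
    # Clip the tail's denominator so the power is well defined even where
    # the linear branch is the one actually selected by np.where.
    z = np.maximum((1.0 + c) * u - c + a, 1e-12)
    tail_part = (1.0 / (1.0 + c)) * (a / z) ** a
    return np.where(u < threshold, linear_part, tail_part)
```

As a sanity check, with a = 1 and c = 1 the threshold is 1/2 and `lum_loss(u)` equals 1/(4u) for u >= 1/2, matching the distance-weighted discrimination (DWD) loss.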
Original language | English |
---|---|
Article number | 2150015 |
Journal | International Journal of Wavelets, Multiresolution and Information Processing |
Volume | 19 |
Issue number | 5 |
Early online date | 20 Apr 2021 |
DOIs | |
Publication status | Published - Sept 2021 |
Scopus Subject Areas
- Signal Processing
- Information Systems
- Applied Mathematics
User-Defined Keywords
- comparison theorem
- Gaussian kernels
- generalization error
- large-margin unified machine
- misclassification error