Comparison theorems on large-margin learning

Amina Benabid, Jun Fan, Dao Hong Xiang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper studies the binary classification problem associated with a family of Lipschitz convex loss functions called large-margin unified machines (LUMs), which offer a natural bridge between distribution-based likelihood approaches and margin-based approaches. LUMs can overcome the so-called data piling issue of support vector machines in the high-dimension, low-sample-size setting, yet their theoretical analysis from the perspective of learning theory is still lacking. In this paper, we establish new comparison theorems for all LUM loss functions, which play a key role in the error analysis of large-margin learning algorithms. Based on these comparison theorems, we further derive learning rates for regularized LUM schemes associated with varying Gaussian kernels, which may be of independent interest.
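For concreteness, the LUM loss family mentioned above can be sketched in a few lines. This is a hedged illustration assuming the standard parameterization from the LUM literature (an index parameter a > 0 controlling the tail and a hybrid parameter c ≥ 0 interpolating between soft and hard classification); the function name and defaults are illustrative, not taken from the paper.

```python
import numpy as np

def lum_loss(u, a=1.0, c=1.0):
    """LUM loss V(u) evaluated at the functional margin u = y * f(x).

    Assumed parameterization: V(u) = 1 - u for u < c / (1 + c), and
    V(u) = (1 / (1 + c)) * (a / ((1 + c) * u - c + a)) ** a otherwise.
    The two pieces join continuously at u = c / (1 + c), giving a
    Lipschitz convex loss for a > 0 and c >= 0.
    """
    u = np.asarray(u, dtype=float)
    threshold = c / (1.0 + c)
    linear = 1.0 - u                                          # hinge-like part
    tail = (1.0 / (1.0 + c)) * (a / ((1.0 + c) * u - c + a)) ** a  # smooth tail
    return np.where(u < threshold, linear, tail)
```

Taking c large recovers a hinge-like loss (the SVM limit), while small c yields a smooth, everywhere-differentiable tail, which is the mechanism by which LUMs avoid data piling.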

Original language: English
Article number: 2150015
Journal: International Journal of Wavelets, Multiresolution and Information Processing
Publication status: Accepted/In press - 2021

Scopus Subject Areas

  • Signal Processing
  • Information Systems
  • Applied Mathematics

User-Defined Keywords

  • comparison theorem
  • Gaussian kernels
  • generalization error
  • large-margin unified machine
  • misclassification error
