Abstract
Kernel support vector machines (SVMs) deliver state-of-the-art results on many real-world nonlinear classification problems, but maintaining a large number of support vectors makes them computationally demanding. Linear SVMs, on the other hand, scale well to large data but are suited only to linearly separable problems. In this paper, we propose a novel approach, the low-rank linearized SVM, to scale up kernel SVMs on limited resources. Our approach transforms a nonlinear SVM into a linear one via an approximate empirical kernel map computed from efficient kernel low-rank decompositions. We theoretically analyze the gap between the solutions obtained from the approximate and the optimal rank-k kernel maps, which in turn provides guidance on the sampling scheme of the Nyström approximation. Furthermore, we extend the method to a semisupervised metric learning scenario in which partially labeled samples can be exploited to further improve the quality of the low-rank embedding. Our approach inherits the rich representability of kernel SVMs and the high efficiency of linear SVMs. Experimental results demonstrate that it is more robust and achieves a better tradeoff between model representability and scalability than state-of-the-art algorithms for large-scale SVMs.
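The linearization recipe described in the abstract can be illustrated with a minimal sketch: compute an explicit rank-k empirical kernel map via a Nyström approximation, then train a linear SVM on the mapped features. The sketch below uses scikit-learn's generic Nystroem transformer and LinearSVC as stand-ins; it shows the general low-rank linearization idea, not the authors' specific algorithm or their guided sampling scheme.

```python
# Minimal sketch (not the authors' implementation): Nystroem rank-k kernel
# map followed by a linear SVM, approximating a kernel SVM at linear cost.
from sklearn.datasets import make_circles
from sklearn.kernel_approximation import Nystroem
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# A toy nonlinearly separable problem (concentric circles).
X, y = make_circles(n_samples=2000, noise=0.1, factor=0.4, random_state=0)

# Rank-k Nystroem approximation of the RBF kernel: n_components landmark
# points define an explicit empirical kernel map phi(x) in R^k.
feature_map = Nystroem(kernel="rbf", gamma=1.0, n_components=100,
                       random_state=0)

# A linear SVM trained on phi(X) stands in for the nonlinear kernel SVM.
model = make_pipeline(feature_map, LinearSVC(max_iter=5000))
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```

In this pipeline the embedding dimension (n_components, i.e., the rank k) controls the tradeoff between representability and scalability that the paper analyzes.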
| Original language | English |
| --- | --- |
| Pages (from-to) | 369-378 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 30 |
| Issue number | 2 |
| Early online date | 21 Jun 2018 |
| DOIs | |
| Publication status | Published - Feb 2019 |
User-Defined Keywords
- Large-scale learning
- low-rank approximation
- metric learning
- support vector machine (SVM)