A Semi-supervised SVM for Manifold Learning

Zhili Wu*, Chun Hung Li, Ji Zhu, Jian Huang

*Corresponding author for this work

Research output: Chapter in book/report/conference proceeding › Conference proceeding › peer-review

6 Citations (Scopus)


Many classification tasks benefit from integrating manifold learning with semi-supervised learning. By formulating the learning task in a semi-supervised manner, we propose a novel objective function that combines the manifold consistency of the whole dataset with the hinge loss of class label prediction. This formulation results in an SVM-like task operating on a kernel derived from the graph Laplacian, and is capable of capturing the intrinsic manifold structure of the whole dataset while maximizing the margin separating labelled examples. Results on face and handwritten digit recognition tasks show significant performance gains. The gains are particularly impressive when only a small training set is available, which is often the case in real-world problems.
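The approach sketched in the abstract can be illustrated with a minimal example: build a graph Laplacian over all points (labelled and unlabelled), derive a smoothness-regularized kernel from it, and train a standard SVM on that precomputed kernel using only the labelled examples. This is an assumption-laden sketch in the spirit of Laplacian-based semi-supervised SVMs, not the authors' exact formulation; the kernel choice K = (I + γL)⁻¹ and all parameter values here are illustrative.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neighbors import kneighbors_graph
from sklearn.svm import SVC

# A small handwritten-digit subset (labelled + unlabelled pool)
X, y = load_digits(return_X_y=True)
X, y = X[:200], y[:200]

# Symmetrized k-NN adjacency W and unnormalized graph Laplacian L = D - W
W = kneighbors_graph(X, n_neighbors=10, mode="connectivity").toarray()
W = np.maximum(W, W.T)
L = np.diag(W.sum(axis=1)) - W

# Illustrative manifold-consistent kernel: K = (I + gamma * L)^-1,
# which penalizes functions that vary sharply across graph edges
gamma = 1.0
K = np.linalg.inv(np.eye(len(X)) + gamma * L)

# Pretend only a small subset is labelled, as in the paper's setting
labelled = np.arange(0, 200, 10)  # 20 labelled examples

# Ordinary SVM with hinge loss, run on the precomputed Laplacian kernel
clf = SVC(kernel="precomputed", C=10.0)
clf.fit(K[np.ix_(labelled, labelled)], y[labelled])

# Predict for every point via its kernel values against the labelled set
pred = clf.predict(K[:, labelled])
acc = (pred == y).mean()
```

Because the kernel is computed from the whole dataset's graph, the unlabelled points shape the decision function even though only the labelled rows enter the SVM fit, which is what lets the margin respect the manifold structure.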

Original language: English
Title of host publication: Proceedings of the 18th International Conference on Pattern Recognition, ICPR 2006
Place of publication: United States
Number of pages: 4
ISBN (Print): 0769525210, 9780769525211
Publication status: Published - Aug 2006
Event: The 18th International Conference on Pattern Recognition, ICPR 2006 - Hong Kong Convention and Exhibition Center, Hong Kong
Duration: 20 Aug 2006 - 24 Aug 2006
https://www.comp.hkbu.edu.hk/~icpr06/index.php (Link to conference website)
https://ieeexplore.ieee.org/xpl/conhome/11159/proceeding (Link to conference proceedings)

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (Print): 1051-4651


Conference: The 18th International Conference on Pattern Recognition, ICPR 2006
Country/Territory: Hong Kong

Scopus Subject Areas

  • Computer Vision and Pattern Recognition


