In this paper, we study the problem of face identification from only one training sample per person (OSPP). For a face identification system, the most critical obstacles to real-world application are often caused by disguised, corrupted, and variably illuminated images in limited sample sets. Meanwhile, storing fewer training samples substantially reduces the cost of collecting, storing, and processing data. Unfortunately, most methods in the literature require large training sets to achieve good representation and generalization abilities, and fail when only one training sample per person is available. We propose a two-step scheme for the OSPP problem by posing it as a representation-and-matching problem. For the representation step, we present a novel manifold embedding algorithm, namely sparse discriminative multi-manifold embedding (SDMME), to learn the intrinsic representation underlying the raw data. We construct two sparse graphs that measure sample similarity, based on two structured dictionaries. Multiple feature spaces are learned to simultaneously minimize the deviation of each sample from the subspace of its own class and maximize its distances to the subspaces of other classes. For the matching step, we use a distance metric based on the manifold structure to identify the person. Extensive experiments demonstrate that the proposed method outperforms other state-of-the-art methods on one-sample face identification, while its robustness to occlusion and illumination variation highlights the contribution of our work.
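The two ingredients named above, a sparse similarity graph built from sparse codes and a subspace-residual distance for matching, can be illustrated with a minimal sketch. This is not the paper's SDMME implementation; the helper names (`omp`, `sparse_graph`, `subspace_residual`) and the choice of orthogonal matching pursuit for sparse coding are assumptions made for illustration only.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: a k-sparse code of x over dictionary D.
    (Illustrative stand-in for the structured sparse coding used in the paper.)"""
    residual = x.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    code = np.zeros(D.shape[1])
    code[support] = coef
    return code

def sparse_graph(X, k=2):
    """Sparse similarity graph: weight of edge (i, j) is the magnitude of
    sample j's coefficient when sparsely coding sample i over the other samples."""
    n = X.shape[1]
    W = np.zeros((n, n))
    for i in range(n):
        D = np.delete(X, i, axis=1)
        D = D / np.linalg.norm(D, axis=0)            # unit-norm dictionary atoms
        code = omp(D, X[:, i], k)
        W[i, np.arange(n) != i] = np.abs(code)
    return np.maximum(W, W.T)                         # symmetrize

def subspace_residual(x, B):
    """Distance from probe x to the subspace spanned by the columns of B:
    the norm of the residual after least-squares projection."""
    coef, *_ = np.linalg.lstsq(B, x, rcond=None)
    return np.linalg.norm(x - B @ coef)
```

At match time, a probe would be assigned to the class whose (sub)space yields the smallest residual; in the OSPP setting the per-class basis `B` would be built from the single gallery sample together with learned variation atoms, a detail this sketch leaves out.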
Scopus Subject Areas
- Signal Processing
- Computer Vision and Pattern Recognition
- Artificial Intelligence

Keywords
- Face recognition
- Manifold embedding
- Structured sparse representation