Recognition of head-&-shoulder face image using virtual frontal-view image

G. C. Feng*, Pong Chi YUEN

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

32 Citations (Scopus)


This paper addresses the problem of face recognition under varying poses. One approach to recognizing a face under different poses is to use a three-dimensional (3-D) model of the human face. This approach is flexible, but the equipment for acquiring 3-D face images is very expensive. A second approach is view-based; however, the system complexity is very high, as it requires constructing a representation for each view, and a 3-D rotation may require dozens of representations. This paper proposes a new idea: transform a face image with an unknown pose into a frontal view for recognition. To construct the virtual frontal-view image, we have developed an algorithm for detecting facial landmarks, which are then used to estimate the orientation of the face. A generic 3-D spring-based face model is developed to transform the unknown face image into a virtual frontal-view image. Finally, a spectroface method, based on the wavelet transform and the Fourier transform, is developed to recognize the virtual frontal face image. The proposed method has been tested on 1145 face images from 85 persons with different poses, facial expressions and small occlusions. The recognition accuracy for the best match is 84.7%. If we consider the top three matches, the accuracy increases to 92.9%.
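The abstract describes a spectroface-style matching stage: a wavelet decomposition followed by a Fourier transform, whose magnitude spectrum is invariant to image translation. The sketch below is only an illustration of that general idea, not the authors' actual algorithm; the Haar decomposition, the normalization, and the nearest-neighbour matcher are all assumptions chosen for brevity.

```python
import numpy as np

def haar_lowband(img):
    """One level of a 2-D Haar wavelet decomposition; keep the low-low band."""
    rows = (img[0::2, :] + img[1::2, :]) / 2.0   # average adjacent rows
    return (rows[:, 0::2] + rows[:, 1::2]) / 2.0  # average adjacent columns

def spectroface_features(img):
    """Illustrative spectroface-style features (an assumption, not the paper's
    exact method): Haar low band, then the Fourier magnitude spectrum, which
    is invariant to circular translation of the input image."""
    mag = np.abs(np.fft.fft2(haar_lowband(img)))
    return mag / (np.linalg.norm(mag) + 1e-12)    # normalize for matching

def best_match(probe, gallery):
    """Return the index of the gallery image whose features are nearest
    (Euclidean distance) to the probe's features."""
    p = spectroface_features(probe)
    dists = [np.linalg.norm(p - spectroface_features(g)) for g in gallery]
    return int(np.argmin(dists))
```

Because only the Fourier magnitude is kept, a circularly shifted copy of a gallery face produces identical features and is matched back to the original, which is the property that makes the representation robust to small alignment errors.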

Original language: English
Pages (from-to): 871-883
Number of pages: 13
Journal: IEEE Transactions on Systems, Man, and Cybernetics Part A: Systems and Humans
Issue number: 6
Publication status: Published - Nov 2000

Scopus Subject Areas

  • Software
  • Control and Systems Engineering
  • Human-Computer Interaction
  • Computer Science Applications
  • Electrical and Electronic Engineering

