Perturbation LDA: Learning the difference between the class empirical mean and its expectation

Wei Shi Zheng, J. H. Lai*, Pong Chi Yuen, Stan Z. Li

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

26 Citations (Scopus)

Abstract

Fisher's linear discriminant analysis (LDA) is popular for dimension reduction and discriminant feature extraction in many pattern recognition applications, especially biometric learning. The derivation of Fisher's LDA assumes that the class empirical mean equals its expectation; however, this assumption may not hold in practice. In this paper, from a "perturbation" perspective, we develop a new algorithm, called perturbation LDA (P-LDA), in which perturbation random vectors are introduced to learn the effect of the difference between the class empirical mean and its expectation on the Fisher criterion. This perturbation learning yields new forms of within-class and between-class covariance matrices integrated with perturbation factors. Moreover, a method is proposed for estimating the covariance matrices of the perturbation random vectors for practical implementation. The proposed P-LDA is evaluated on both synthetic data sets and real face image data sets. Experimental results show that P-LDA outperforms popular Fisher's LDA-based algorithms in the undersampled case.
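For reference, the classical two-class Fisher criterion that P-LDA builds on reduces to the direction w ∝ Sw⁻¹(m1 − m2), where m1, m2 are the class *empirical* means — precisely the quantities whose deviation from their expectations the paper models. A minimal numpy sketch of this baseline (not of P-LDA itself; the function name is illustrative):

```python
import numpy as np

def fisher_lda_direction(X1, X2):
    """Classical two-class Fisher LDA direction w ∝ Sw^{-1} (m1 - m2).

    Uses the empirical class means; P-LDA's contribution is to model
    the gap between these empirical means and their expectations,
    which this baseline ignores.
    """
    # Empirical class means
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter matrix (sum of centered outer products)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)
    # Fisher discriminant direction, normalized to unit length
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

In the undersampled case highlighted by the abstract (more dimensions than samples), Sw becomes singular and this direct solve fails — one reason LDA variants such as P-LDA are needed there.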

Original language: English
Pages (from-to): 764-779
Number of pages: 16
Journal: Pattern Recognition
Volume: 42
Issue number: 5
DOIs
Publication status: Published - May 2009

Scopus Subject Areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

User-Defined Keywords

  • Face recognition
  • Fisher criterion
  • Perturbation analysis

