Joint sparse principal component analysis

Shuangyan Yi, Zhihui Lai, Zhenyu He*, Yiu Ming Cheung, Yang Liu

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

181 Citations (Scopus)


Principal component analysis (PCA) is widely used in dimensionality reduction. Many variants of PCA have been proposed to improve the robustness of the algorithm. However, the existing methods either cannot select useful features consistently or are still sensitive to outliers, which degrades their classification accuracy. In this paper, a novel approach called joint sparse principal component analysis (JSPCA) is proposed to jointly select useful features and enhance robustness to outliers. In detail, JSPCA relaxes the orthogonality constraint on the transformation matrix, giving it more freedom to jointly select useful features for low-dimensional representation. JSPCA imposes joint sparse constraints on its objective function, i.e., the ℓ2,1-norm is imposed on both the loss term and the regularization term, to improve the algorithmic robustness. A simple yet effective optimization solution is presented and theoretical analyses of JSPCA are provided. Experimental results on eight data sets demonstrate that the proposed approach is feasible and effective.
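The abstract describes an objective in which the ℓ2,1-norm is applied to both a reconstruction loss and a regularizer on the transformation matrix. The following is a minimal numerical sketch of such an objective, not the authors' exact formulation: the matrix shapes, the reconstruction form `X Q Pᵀ`, and the weight `lam` are illustrative assumptions.

```python
import numpy as np

def l21_norm(M):
    """ℓ2,1-norm: the sum of the ℓ2-norms of the rows of M.

    Zeroing a whole row of M removes it from the sum, which is why
    this norm encourages row-wise (joint) sparsity.
    """
    return float(np.sum(np.linalg.norm(M, axis=1)))

# Illustrative data: n samples (rows) in d dimensions, projected to k dims.
rng = np.random.default_rng(0)
n, d, k = 100, 10, 3
X = rng.standard_normal((n, d))

# Q: transformation (loadings) matrix whose rows are driven to zero by
# the ℓ2,1 penalty, effecting feature selection; P: recovery matrix.
Q = rng.standard_normal((d, k))
P = np.linalg.qr(rng.standard_normal((d, k)))[0]  # orthonormal columns

lam = 0.1  # hypothetical trade-off parameter
reconstruction_loss = l21_norm(X - X @ Q @ P.T)   # ℓ2,1 loss term
penalty = l21_norm(Q)                             # ℓ2,1 regularizer
objective = reconstruction_loss + lam * penalty
```

Because the loss sums per-sample (per-row) ℓ2 errors rather than squared errors, an outlying sample contributes linearly instead of quadratically, which is the source of the robustness claimed in the abstract.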

Original language: English
Pages (from-to): 524-536
Number of pages: 13
Journal: Pattern Recognition
Publication status: Published - 1 Jan 2017

Scopus Subject Areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence

User-Defined Keywords

  • Dimensionality reduction
  • Joint sparse
  • ℓ2,1-norm
