Unified Sparse Subspace Learning via Self-Contained Regression

Shuangyan Yi, Zhenyu He*, Yiu Ming Cheung, Wen Sheng Chen

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

27 Citations (Scopus)


To improve the interpretability of principal components, many sparse principal component analysis (PCA) methods have been proposed in the form of self-contained regression. In this paper, we generalize the steps needed to move from a PCA-like method to its self-contained regression-type counterpart, and propose a joint sparse pixel-weighted PCA method. More specifically, we generalize a self-contained regression-type framework of graph embedding. Unlike the conventional regression-type of graph embedding, which relies on the regular low-dimensional data, the self-contained regression-type framework does not rely on it; the low-dimensional data learned in the self-contained regression form theoretically approximates the regular low-dimensional data. Under this self-contained regression-type framework, a sparse regularization term can be added freely, and hence the learned sparse regression coefficients can interpret the low-dimensional data. By using the joint sparse ℓ2,1-norm regularizer, a sparse self-contained regression-type of pixel-weighted PCA is obtained. Experiments on six data sets demonstrate that the proposed method is both feasible and effective.
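As a minimal illustration of the joint sparse ℓ2,1-norm regularizer mentioned in the abstract (not the paper's actual algorithm; the function names here are assumptions for the sketch): the ℓ2,1 norm sums the ℓ2 norms of a coefficient matrix's rows, and its proximal operator zeroes out entire rows, which is what produces row-wise (joint) sparsity in the regression coefficients.

```python
import numpy as np

def l21_norm(W):
    """Sum of the l2 norms of the rows of W: the joint-sparsity regularizer."""
    return np.linalg.norm(W, axis=1).sum()

def l21_prox(W, t):
    """Proximal operator of t * ||W||_{2,1}: row-wise soft-thresholding.
    Rows whose l2 norm is below t are set entirely to zero, so the
    regularizer selects whole rows (features) rather than single entries."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * W

# Toy coefficient matrix: one strong row, one weak row, one zero row.
W = np.array([[3.0, 4.0],
              [0.1, 0.0],
              [0.0, 0.0]])
print(l21_norm(W))       # 5.0 + 0.1 + 0.0 = 5.1
print(l21_prox(W, 0.5))  # weak row is zeroed out entirely
```

Running the proximal step with threshold 0.5 keeps the strong first row (shrunk toward zero) and eliminates the weak second row, showing how ℓ2,1 regularization yields interpretable, jointly sparse regression coefficients.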

Original language: English
Article number: 7962190
Pages (from-to): 2537-2550
Number of pages: 14
Journal: IEEE Transactions on Circuits and Systems for Video Technology
Issue number: 10
Publication status: Published - Oct 2018

Scopus Subject Areas

  • Media Technology
  • Electrical and Electronic Engineering

User-Defined Keywords

  • self-contained regression-type
  • sparse subspace learning
  • Weighted PCA
