TY - JOUR
T1 - Unified Sparse Subspace Learning via Self-Contained Regression
AU - Yi, Shuangyan
AU - He, Zhenyu
AU - Cheung, Yiu Ming
AU - Chen, Wen Sheng
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2018/10
Y1 - 2018/10
N2 - In order to improve the interpretability of principal components, many sparse principal component analysis (PCA) methods have been proposed in the form of self-contained regression. In this paper, we generalize the steps needed to move from PCA-like methods to their self-contained regression-types, and propose a joint sparse pixel-weighted PCA method. More specifically, we generalize a self-contained regression-type framework of graph embedding. Unlike the regression-type of graph embedding, which relies on the regular low-dimensional data, the self-contained regression-type framework does not rely on the regular low-dimensional data of graph embedding. The low-dimensional data learned in the form of self-contained regression theoretically approximates the regular low-dimensional data. Under this self-contained regression-type, a sparse regularization term can be added arbitrarily, and hence the learned sparse regression coefficients can interpret the low-dimensional data. By using the joint sparse ℓ2,1-norm regularizer, a sparse self-contained regression-type of pixel-weighted PCA can be produced. Experiments on six data sets demonstrate that the proposed method is both feasible and effective.
AB - In order to improve the interpretability of principal components, many sparse principal component analysis (PCA) methods have been proposed in the form of self-contained regression. In this paper, we generalize the steps needed to move from PCA-like methods to their self-contained regression-types, and propose a joint sparse pixel-weighted PCA method. More specifically, we generalize a self-contained regression-type framework of graph embedding. Unlike the regression-type of graph embedding, which relies on the regular low-dimensional data, the self-contained regression-type framework does not rely on the regular low-dimensional data of graph embedding. The low-dimensional data learned in the form of self-contained regression theoretically approximates the regular low-dimensional data. Under this self-contained regression-type, a sparse regularization term can be added arbitrarily, and hence the learned sparse regression coefficients can interpret the low-dimensional data. By using the joint sparse ℓ2,1-norm regularizer, a sparse self-contained regression-type of pixel-weighted PCA can be produced. Experiments on six data sets demonstrate that the proposed method is both feasible and effective.
KW - Self-contained regression-type
KW - Sparse subspace learning
KW - Weighted PCA
UR - http://www.scopus.com/inward/record.url?scp=85022026314&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2017.2721541
DO - 10.1109/TCSVT.2017.2721541
M3 - Journal article
AN - SCOPUS:85022026314
SN - 1051-8215
VL - 28
SP - 2537
EP - 2550
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
IS - 10
M1 - 7962190
ER -