Abstract
In order to improve the interpretability of principal components, many sparse principal component analysis (PCA) methods have been proposed in a self-contained regression form. In this paper, we generalize the steps needed to move from PCA-like methods to their self-contained regression forms, and propose a joint sparse pixel-weighted PCA method. More specifically, we generalize a self-contained regression-type framework of graph embedding. Unlike regression-type formulations of graph embedding, which rely on the regular low-dimensional data, the self-contained regression-type framework does not; the low-dimensional data learned in the self-contained regression form theoretically approximates the regular low-dimensional data. Under this self-contained regression form, a sparse regularization term can be added arbitrarily, and hence the learned sparse regression coefficients can interpret the low-dimensional data. By using the joint sparse 2,1-norm regularizer, a sparse self-contained regression-type pixel-weighted PCA can be produced. Experiments on six data sets demonstrate that the proposed method is both feasible and effective.
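The abstract does not give the optimization details, but the general shape of a 2,1-norm regularized, regression-type sparse PCA can be sketched. The following is a minimal illustration, not the paper's algorithm: it alternately solves an orthogonal Procrustes step for a projection `P` and a reweighted ridge system for the sparse loadings `W`, under the assumed objective `min ||X - X W Pᵀ||²_F + λ ||W||_{2,1}` with `PᵀP = I`. All names (`sparse_regression_pca`, `lam`) are hypothetical.

```python
import numpy as np

def sparse_regression_pca(X, k, lam=1.0, n_iter=50, eps=1e-8):
    """Sketch of a self-contained regression-type sparse PCA (assumed objective):
        min_{P, W} ||X - X W P^T||_F^2 + lam * ||W||_{2,1},  P^T P = I.
    Rows of W are shrunk toward zero by the l2,1 penalty, so near-zero rows
    mark input features (e.g. pixels) that the sparse loadings discard.
    """
    n, d = X.shape
    G = X.T @ X                                          # d x d Gram matrix
    W = np.linalg.svd(X, full_matrices=False)[2][:k].T   # init: top-k PCA loadings
    for _ in range(n_iter):
        # P-step: orthogonal Procrustes, P = U V^T from the SVD of G W
        U, _, Vt = np.linalg.svd(G @ W, full_matrices=False)
        P = U @ Vt
        # W-step: iteratively reweighted ridge system induced by the l2,1 norm
        row_norms = np.sqrt((W ** 2).sum(axis=1)) + eps  # eps guards division by 0
        D = np.diag(1.0 / row_norms)
        W = np.linalg.solve(G + 0.5 * lam * D, G @ P)
    return W, P
```

The interpretable low-dimensional data is then `Y = X @ W`, with each column of `W` indicating which input features contribute to the corresponding component.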
| Original language | English |
|---|---|
| Article number | 7962190 |
| Pages (from-to) | 2537-2550 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Circuits and Systems for Video Technology |
| Volume | 28 |
| Issue number | 10 |
| DOIs | |
| Publication status | Published - Oct 2018 |
User-Defined Keywords
- Self-contained regression-type
- Sparse subspace learning
- Weighted PCA