TY - GEN
T1 - Scalable spectral k-support norm regularization for robust low rank subspace learning
AU - Cheung, Yiu Ming
AU - Lou, Jian
N1 - Funding Information:
This work was supported by the Faculty Research Grant of Hong Kong Baptist University under Project: FRG2/15-16/049, and National Natural Science Foundation of China under Grants: 61272366 and 61672444.
PY - 2016/10/24
Y1 - 2016/10/24
N2 - Robust low rank subspace learning, a fundamental tool in data mining and computer vision, aims to recover a low rank matrix under gross corruptions that are often modeled by an additional sparse matrix. In this paper, we investigate the spectral k-support norm, a more appealing convex relaxation than the popular nuclear norm, as the low rank penalty in this setting. Despite its better recovery performance, the spectral k-support norm makes the model difficult to optimize efficiently, which severely limits its scalability in practice. Therefore, this paper proposes a scalable and efficient algorithm that considers the dual objective of the original problem, which can exploit the more computationally efficient linear oracle of the spectral k-support norm. Further, by studying the sub-gradient of the loss of the dual objective, a line-search strategy is adopted to enable the algorithm to adapt to the Hölder smoothness. Experiments on various tasks demonstrate the superior prediction performance and computational efficiency of the proposed algorithm.
AB - Robust low rank subspace learning, a fundamental tool in data mining and computer vision, aims to recover a low rank matrix under gross corruptions that are often modeled by an additional sparse matrix. In this paper, we investigate the spectral k-support norm, a more appealing convex relaxation than the popular nuclear norm, as the low rank penalty in this setting. Despite its better recovery performance, the spectral k-support norm makes the model difficult to optimize efficiently, which severely limits its scalability in practice. Therefore, this paper proposes a scalable and efficient algorithm that considers the dual objective of the original problem, which can exploit the more computationally efficient linear oracle of the spectral k-support norm. Further, by studying the sub-gradient of the loss of the dual objective, a line-search strategy is adopted to enable the algorithm to adapt to the Hölder smoothness. Experiments on various tasks demonstrate the superior prediction performance and computational efficiency of the proposed algorithm.
KW - Conditional gradient
KW - Robust low rank subspace learning
KW - Spectral k-support norm
UR - http://www.scopus.com/inward/record.url?scp=84996523325&partnerID=8YFLogxK
U2 - 10.1145/2983323.2983738
DO - 10.1145/2983323.2983738
M3 - Conference proceeding
AN - SCOPUS:84996523325
T3 - International Conference on Information and Knowledge Management, Proceedings
SP - 1151
EP - 1160
BT - CIKM 2016 - Proceedings of the 2016 ACM Conference on Information and Knowledge Management
PB - Association for Computing Machinery (ACM)
T2 - 25th ACM International Conference on Information and Knowledge Management, CIKM 2016
Y2 - 24 October 2016 through 28 October 2016
ER -