TY - JOUR
T1 - Parallel Active Subspace Decomposition for Tensor Robust Principal Component Analysis
AU - Ng, Michael K.
AU - Wang, Xue Zhong
N1 - Publisher Copyright:
© 2020, Shanghai University.
PY - 2021/6
Y1 - 2021/6
N2 - Tensor robust principal component analysis has received substantial attention in various fields. Most existing methods rely on tensor nuclear norm minimization and incur a high computational cost because they require multiple singular value decompositions at each iteration. To overcome this drawback, we propose a scalable and efficient method, named parallel active subspace decomposition, which factorizes the unfolding along each mode of the tensor, in parallel, into a columnwise orthonormal matrix (the active subspace) and another small matrix. This transformation leads to a nonconvex optimization problem in which the scale of the nuclear norm minimization is generally much smaller than in the original problem. We solve the optimization problem by an alternating direction method of multipliers and show that the iterates converge within the given stopping criterion and that the converged solution lies within a prescribed bound of the global optimum. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
AB - Tensor robust principal component analysis has received substantial attention in various fields. Most existing methods rely on tensor nuclear norm minimization and incur a high computational cost because they require multiple singular value decompositions at each iteration. To overcome this drawback, we propose a scalable and efficient method, named parallel active subspace decomposition, which factorizes the unfolding along each mode of the tensor, in parallel, into a columnwise orthonormal matrix (the active subspace) and another small matrix. This transformation leads to a nonconvex optimization problem in which the scale of the nuclear norm minimization is generally much smaller than in the original problem. We solve the optimization problem by an alternating direction method of multipliers and show that the iterates converge within the given stopping criterion and that the converged solution lies within a prescribed bound of the global optimum. Experimental results demonstrate that the proposed model outperforms state-of-the-art methods.
KW - 65F22
KW - 65F30
KW - Active subspace decomposition
KW - Low-rank tensors
KW - Matrix factorization
KW - Nuclear norm minimization
KW - Principal component analysis
UR - http://www.scopus.com/inward/record.url?scp=85132244381&partnerID=8YFLogxK
UR - https://link.springer.com/article/10.1007/s42967-020-00063-9
U2 - 10.1007/s42967-020-00063-9
DO - 10.1007/s42967-020-00063-9
M3 - Journal article
AN - SCOPUS:85132244381
SN - 2096-6385
VL - 3
SP - 221
EP - 241
JO - Communications on Applied Mathematics and Computation
JF - Communications on Applied Mathematics and Computation
IS - 2
ER -