TY - JOUR
T1 - Nonconvex Optimization for Robust Tensor Completion from Grossly Sparse Observations
AU - Zhao, Xueying
AU - Bai, Minru
AU - Ng, Michael K.
N1 - Minru Bai: Research supported in part by the National Natural Science Foundation of China under Grant 11971159.
Michael K. Ng: Research supported in part by the HKRGC GRF 12306616, 12200317, 12300218, 12300519, 17201020, and HKU Grant 104005583.
Publisher Copyright:
© 2020, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2020/11/5
Y1 - 2020/11/5
N2 - In this paper, we consider the robust tensor completion problem of recovering a low-rank tensor from limited samples that are sparsely corrupted, especially by impulse noise. A convex relaxation of this problem minimizes a weighted combination of the tubal nuclear norm and an ℓ1-norm data fidelity term. However, the ℓ1-norm may yield biased estimators and fail to achieve the best estimation performance. To overcome this disadvantage, we propose and develop a nonconvex model that minimizes a weighted combination of the tubal nuclear norm, the ℓ1-norm data fidelity term, and a concave smooth correction term. Further, we present a Gauss–Seidel difference of convex functions algorithm (GS-DCA) that solves the resulting optimization model by using a linearization technique. We prove that the iteration sequence generated by GS-DCA converges to a critical point of the proposed model. Furthermore, we propose an extrapolation technique for GS-DCA to improve its performance. Numerical experiments on color images, hyperspectral images, magnetic resonance images and videos demonstrate the effectiveness of the proposed method.
AB - In this paper, we consider the robust tensor completion problem of recovering a low-rank tensor from limited samples that are sparsely corrupted, especially by impulse noise. A convex relaxation of this problem minimizes a weighted combination of the tubal nuclear norm and an ℓ1-norm data fidelity term. However, the ℓ1-norm may yield biased estimators and fail to achieve the best estimation performance. To overcome this disadvantage, we propose and develop a nonconvex model that minimizes a weighted combination of the tubal nuclear norm, the ℓ1-norm data fidelity term, and a concave smooth correction term. Further, we present a Gauss–Seidel difference of convex functions algorithm (GS-DCA) that solves the resulting optimization model by using a linearization technique. We prove that the iteration sequence generated by GS-DCA converges to a critical point of the proposed model. Furthermore, we propose an extrapolation technique for GS-DCA to improve its performance. Numerical experiments on color images, hyperspectral images, magnetic resonance images and videos demonstrate the effectiveness of the proposed method.
KW - Difference of convex functions
KW - Impulse noises
KW - Low-rank
KW - Robust tensor completion
KW - Sparsity
UR - http://www.scopus.com/inward/record.url?scp=85095129395&partnerID=8YFLogxK
U2 - 10.1007/s10915-020-01356-0
DO - 10.1007/s10915-020-01356-0
M3 - Journal article
AN - SCOPUS:85095129395
SN - 0885-7474
VL - 85
JO - Journal of Scientific Computing
JF - Journal of Scientific Computing
M1 - 46
ER -
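
Note: as a reading aid, the following is a minimal LaTeX sketch of the kind of objective the abstract describes, placed after the record so the RIS fields above stay intact. The notation (L, S, M, P_Omega, lambda, varphi) is assumed here for illustration only and is not taken from the paper's exact formulation.

\documentclass{article}
\usepackage{amsmath} % for align* and \text
\begin{document}
% L = low-rank tensor, S = sparse (impulse-noise) corruption,
% M = observed data, P_Omega = projection onto observed entries.
\begin{align*}
% Convex relaxation: tubal nuclear norm plus an l1 data fidelity term.
&\min_{\mathcal{L},\,\mathcal{S}} \;
  \|\mathcal{L}\|_{\mathrm{TNN}}
  + \lambda\,\|\mathcal{S}\|_{1}
  \quad \text{s.t.} \quad
  \mathcal{P}_{\Omega}(\mathcal{L}+\mathcal{S})
    = \mathcal{P}_{\Omega}(\mathcal{M}),
\\[4pt]
% Nonconvex variant: subtract a concave smooth correction (varphi) to
% reduce the bias of the l1 term; a DC (difference of convex functions)
% scheme linearizes varphi at each iterate, as GS-DCA does in spirit.
&\min_{\mathcal{L},\,\mathcal{S}} \;
  \|\mathcal{L}\|_{\mathrm{TNN}}
  + \lambda\bigl(\|\mathcal{S}\|_{1} - \varphi(\mathcal{S})\bigr)
  \quad \text{s.t.} \quad
  \mathcal{P}_{\Omega}(\mathcal{L}+\mathcal{S})
    = \mathcal{P}_{\Omega}(\mathcal{M}).
\end{align*}
\end{document}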