Abstract
In this paper, we consider the robust tensor completion problem of recovering a low-rank tensor from limited samples and sparsely corrupted observations, in particular observations contaminated by impulse noise. A convex relaxation of this problem minimizes a weighted combination of the tubal nuclear norm and an ℓ1-norm data-fidelity term. However, the ℓ1-norm may yield biased estimators and fail to achieve the best estimation performance. To overcome this drawback, we propose and develop a nonconvex model that minimizes a weighted combination of the tubal nuclear norm, the ℓ1-norm data-fidelity term, and a concave smooth correction term. We then present a Gauss–Seidel difference-of-convex-functions algorithm (GS-DCA) that solves the resulting optimization model via a linearization technique, and we prove that the iterate sequence generated by GS-DCA converges to a critical point of the proposed model. Furthermore, we equip GS-DCA with an extrapolation technique to improve its performance. Numerical experiments on color images, hyperspectral images, magnetic resonance images, and videos demonstrate the effectiveness of the proposed method.
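The GS-DCA details appear in the paper itself; as a hedged illustration of the underlying difference-of-convex idea only (not the authors' algorithm, and with purely illustrative function names), the following toy sketch minimizes a scalar DC function f(x) = g(x) − h(x) with g(x) = x⁴ and h(x) = 2x² by linearizing h at the current iterate, the same linearization principle the abstract refers to:

```python
# Toy difference-of-convex (DC) illustration; NOT the paper's GS-DCA.
# Minimize f(x) = g(x) - h(x), with g(x) = x**4 and h(x) = 2*x**2 both convex.
# DCA step: replace h by its linearization at x_k, then solve the convex
# subproblem min_x g(x) - h'(x_k) * x, whose stationarity condition here is
# 4*x**3 = h'(x_k), giving a closed-form update.

def dca(x0, iters=60):
    x = x0
    for _ in range(iters):
        grad_h = 4 * x                 # h'(x_k) = 4 x_k
        x = (grad_h / 4) ** (1 / 3)    # solve g'(x) = h'(x_k): 4 x^3 = 4 x_k
    return x

x_star = dca(0.5)
print(x_star)  # iterates approach the critical point x = 1
```

Each subproblem is convex, and the objective value is nonincreasing along the iterates; in the paper this scheme is applied blockwise (Gauss–Seidel) to the tensor model rather than to a scalar function.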
| Original language | English |
|---|---|
| Article number | 46 |
| Number of pages | 32 |
| Journal | Journal of Scientific Computing |
| Volume | 85 |
| DOIs | |
| Publication status | Published - 5 Nov 2020 |
User-Defined Keywords
- Difference of convex functions
- Impulse noises
- Low-rank
- Robust tensor completion
- Sparsity
Fingerprint
Dive into the research topics of 'Nonconvex Optimization for Robust Tensor Completion from Grossly Sparse Observations'.