Compressive total variation for image reconstruction and restoration

Peng Li*, Wengu Chen, Michael K. Ng

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

16 Citations (Scopus)

Abstract

In this paper, we make use of the fact that the image matrix u is (approximately) low-rank in image inpainting, and that the corresponding gradient transform matrices Dxu, Dyu are sparse in image reconstruction and restoration. We further observe that these gradient matrices Dxu, Dyu are themselves (approximately) low-rank, and verify this by numerical tests and theoretical analysis. Based on these observations, we propose a model called compressive total variation (CTV) to characterize the sparsity and low-rank prior knowledge of an image. To solve the proposed model, we design a concrete algorithm with provable convergence, based on inertial proximal ADMM. The performance of the proposed model is tested on magnetic resonance imaging (MRI) reconstruction, image denoising and image deblurring. The proposed method recovers the edges of an image while also preserving its fine details. For images with piecewise constant regions, our model clearly outperforms existing regularization models based on TGV, Shearlet-TGV, ℓ1−αℓ2 TV and BM3D. For natural images, it visibly improves on TV, ℓ1−αℓ2 TV and TGV, and is comparable to Shearlet-TGV.
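The abstract does not give the explicit form of the CTV functional, but the prior it describes, gradients that are simultaneously sparse and (approximately) low-rank, can be illustrated numerically. Below is a minimal Python sketch (not the authors' code) that checks the low-rank claim for the gradient matrices of a piecewise-constant test image and evaluates a hypothetical CTV-style regularizer combining ℓ1 terms with nuclear norms of Dxu and Dyu; the forward-difference discretization, the test image, and the weight `lam` are assumptions for illustration only.

```python
import numpy as np

def grad_x(u):
    """Forward difference along columns (one common discretization of Dx; assumed here)."""
    return np.diff(u, axis=1, append=u[:, -1:])

def grad_y(u):
    """Forward difference along rows (one common discretization of Dy; assumed here)."""
    return np.diff(u, axis=0, append=u[-1:, :])

def nuclear_norm(m):
    """Sum of singular values, the standard convex surrogate for rank."""
    return np.linalg.svd(m, compute_uv=False).sum()

# Hypothetical piecewise-constant test image: a bright square on a dark background.
u = np.zeros((64, 64))
u[16:48, 16:48] = 1.0

Dxu, Dyu = grad_x(u), grad_y(u)

# The gradient matrices are both sparse and low-rank for this image.
for name, g in [("Dxu", Dxu), ("Dyu", Dyu)]:
    s = np.linalg.svd(g, compute_uv=False)
    print(f"{name}: nonzeros = {np.count_nonzero(g)}, "
          f"numerical rank = {np.sum(s > 1e-10)}")

# Illustrative CTV-style objective: l1 sparsity plus nuclear-norm low-rank
# penalties on the gradients. The paper's exact functional and weights may differ.
lam = 1.0  # assumed trade-off weight
ctv = (np.abs(Dxu).sum() + np.abs(Dyu).sum()
       + lam * (nuclear_norm(Dxu) + nuclear_norm(Dyu)))
print("CTV-style regularizer value:", ctv)
```

For this test image both gradient matrices have numerical rank 1, consistent with the abstract's observation that gradients of piecewise-constant images are simultaneously sparse and (approximately) low-rank.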

Original language: English
Pages (from-to): 874-893
Number of pages: 20
Journal: Computers and Mathematics with Applications
Volume: 80
Issue number: 5
DOIs
Publication status: Published - 1 Sept 2020

Scopus Subject Areas

  • Modelling and Simulation
  • Computational Theory and Mathematics
  • Computational Mathematics

User-Defined Keywords

  • Compressive total variation
  • Image deblurring
  • Image denoising
  • Low-rank
  • MRI reconstruction
  • Nuclear norm total (generalized) variation
