Unsupervised learning of multi-task deep variational model

Lu Tan, Ling Li, Wanquan Liu*, Senjian An, Kylie Munyard

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

In this paper, we propose a general deep variational model (in reduced, full, and extended versions) built via a comprehensive fusion approach. It can address a variety of image tasks in a completely unsupervised way, without learning from training samples. Technically, it incorporates the CNN-based deep image prior (DIP) architecture into classic variational image processing models. The minimization strategy is thereby transformed from iteratively solving a sub-problem for each variable to automatically minimizing a loss function by learning the generator network parameters. The proposed deep variational (DV) model supports high-order image editing and applications such as image restoration, inpainting, decomposition, and texture segmentation. Experiments demonstrate significant advantages of the proposed deep variational model over several strong baselines, including variational methods and deep learning approaches.
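
A minimal sketch of the underlying idea follows (an illustration under assumptions, not the authors' implementation): an untrained CNN generator serves as the deep image prior, and the loss combines a data-fidelity term with a classic variational (total-variation) regularizer. Minimizing this loss over the generator's parameters takes the place of iterating sub-problems per variable. The network shape, loss weights, and optimizer settings below are hypothetical.

import torch
import torch.nn as nn

def total_variation(x):
    # Anisotropic TV regularizer: a classic variational smoothness prior.
    dh = (x[..., 1:, :] - x[..., :-1, :]).abs().mean()
    dw = (x[..., :, 1:] - x[..., :, :-1]).abs().mean()
    return dh + dw

# Small untrained convolutional generator acting as the deep image prior.
generator = nn.Sequential(
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
)

degraded = torch.rand(1, 1, 64, 64)   # observed (e.g. noisy) image; dummy data here
z = torch.randn(1, 32, 64, 64)        # fixed random code fed to the generator
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(500):
    optimizer.zero_grad()
    restored = generator(z)
    # Loss = data fidelity + variational (TV) regularization; it is minimized over
    # the network parameters rather than by per-variable sub-problem iterations.
    loss = ((restored - degraded) ** 2).mean() + 0.05 * total_variation(restored)
    loss.backward()
    optimizer.step()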

Original language: English
Article number: 103588
Number of pages: 13
Journal: Journal of Visual Communication and Image Representation
Volume: 87
DOIs
Publication status: Published - Aug 2022

User-Defined Keywords

  • Deep neural networks
  • Diverse applications
  • Integration approach
  • Unsupervised learning
  • Variational general frameworks
