Image super-resolution by textural context constrained visual vocabulary

Yan Liang, Pong Chi YUEN*, Jian Huang Lai

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

6 Citations (Scopus)


Example-based super-resolution (SR) approaches hallucinate missing high-resolution (HR) details by learning from example image patches. This approach implicitly assumes that similarity between low-resolution (LR) patches implies similarity between the corresponding HR patches. In practice, however, this similarity-preserving assumption may not hold, so the example-based super-resolved image inevitably contains artifacts that deviate from the ground truth. In this paper, we propose a novel single-image SR method that integrates an enforced similarity-preserving process, based on a visual vocabulary, into the example-based SR approach. By jointly learning HR and LR visual vocabularies, we obtain a geometric co-occurrence prior that preserves geometric similarity within each visual word. We further propose a two-step SR framework: the first step estimates the optimal visual word using a textural context cue, while the second step enforces the visual word subspace constraint and the reconstruction constraint to estimate the final result. Experiments demonstrate the effectiveness of our method in recovering missing HR details, especially texture.
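The joint-vocabulary idea in the abstract can be illustrated with a minimal sketch: clustering concatenated LR/HR patch features yields paired codewords, so that picking a visual word from an LR patch also fixes an HR subspace. All data, dimensions, and the plain k-means clustering below are illustrative assumptions, not the paper's actual learning procedure (which additionally uses the textural context cue and a reconstruction constraint).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data standing in for co-occurring LR/HR patch
# feature pairs (the paper learns these from example images).
n, d_lr, d_hr = 200, 9, 25           # e.g. 3x3 LR and 5x5 HR patches
lr = rng.normal(size=(n, d_lr))
hr = lr @ rng.normal(size=(d_lr, d_hr)) + 0.1 * rng.normal(size=(n, d_hr))

# Joint vocabulary: cluster concatenated [LR | HR] features so that each
# visual word groups patches that are similar in BOTH domains.
joint = np.hstack([lr, hr])
k = 8
centers = joint[rng.choice(n, k, replace=False)]
for _ in range(20):                  # plain k-means iterations
    dist = ((joint[:, None, :] - centers[None]) ** 2).sum(-1)
    labels = dist.argmin(1)
    for j in range(k):
        if (labels == j).any():
            centers[j] = joint[labels == j].mean(0)

# Split each joint word into its paired LR / HR codewords.
lr_words, hr_words = centers[:, :d_lr], centers[:, d_lr:]

# Step 1 (sketch): for a new LR patch, select the best visual word.
query = lr[0]
word = int(((lr_words - query) ** 2).sum(1).argmin())

# Step 2 (sketch): use that word's HR codeword as the subspace anchor
# for the HR estimate; the paper then refines this under the visual word
# subspace and reconstruction constraints, which are omitted here.
hr_estimate = hr_words[word]
print(hr_estimate.shape)  # → (25,)
```

Because the LR and HR halves are clustered jointly, patches falling in the same visual word are close in both domains, which is one simple way to enforce the similarity-preserving property the paper argues plain example-based SR lacks.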

Original language: English
Pages (from-to): 1096-1108
Number of pages: 13
Journal: Signal Processing: Image Communication
Issue number: 10
Publication status: Published - Nov 2012

Scopus Subject Areas

  • Software
  • Signal Processing
  • Computer Vision and Pattern Recognition
  • Electrical and Electronic Engineering

User-Defined Keywords

  • Similarity preserving
  • Super-resolution
  • Textural context
  • Visual vocabulary


