Cross-Track Illumination Correction for Hyperspectral Pushbroom Sensor Images Using Low-Rank and Sparse Representations

Lina Zhuang, Michael K. Ng, Yao Liu*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

10 Citations (Scopus)


A hyperspectral pushbroom sensor scans objects line-by-line using a detector array, and a cross-track illumination error (CTIE) exists in the imagery acquired in this way. When the illumination of the individual cells of the detector is not well aligned, or when some of the cells are degraded or aged, the acquired images exhibit nonuniform illumination in the cross-track direction. Because additive Gaussian noise is widely found in hyperspectral images (HSIs), we develop a unified mathematical model that describes the image formation process corrupted by both the CTIE and additive Gaussian noise. The CTIE produced by line-by-line scanning is replicated along the flight direction and is therefore modeled as an offset term with equal values in that direction. The main contribution of this study is the development of a hyperspectral image cross-track illumination correction (HyCIC) method, which corrects the cross-track illumination using column (along-track) mean compensation with total variation and sparsity regularizations, and attenuates the Gaussian noise using a form of low-rank constraint. The effectiveness of the proposed method is illustrated using semireal data and real HSIs, and the performance of HyCIC is found to be superior to that of existing methods.
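The image formation model described above (a clean band plus a per-column offset replicated along the flight direction, plus Gaussian noise) can be sketched in NumPy. The snippet below is a minimal illustration of that offset model and of plain column-mean compensation only; it is not the authors' HyCIC method, which additionally uses total variation and sparsity regularizations and a low-rank constraint across bands. All variable names here are hypothetical.

```python
import numpy as np

# Synthetic pushbroom band: Y = X + 1 * c^T + N, where the CTIE offset c
# is constant along the flight (along-track / row) direction.
rng = np.random.default_rng(0)
rows, cols = 100, 64                           # along-track x cross-track
X = np.full((rows, cols), 50.0)                # idealized clean band
c = rng.normal(0.0, 5.0, size=cols)            # per-detector-cell offset (CTIE)
N = rng.normal(0.0, 1.0, size=(rows, cols))    # additive Gaussian noise
Y = X + c[None, :] + N                         # observed band with striping

# Baseline column (along-track) mean compensation: shift each column so
# its mean matches the global mean, removing the cross-track offsets.
col_means = Y.mean(axis=0)
Y_corr = Y - (col_means - col_means.mean())[None, :]

# After compensation the column means are equalized.
print(np.std(Y.mean(axis=0)), np.std(Y_corr.mean(axis=0)))
```

Because the offset is identical in every row of a column, subtracting the centered column means cancels it exactly up to the noise averaged within each column; the regularized HyCIC formulation is designed to avoid this baseline's tendency to also remove genuine along-track scene structure.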

Original language: English
Article number: 5502117
Number of pages: 17
Journal: IEEE Transactions on Geoscience and Remote Sensing
Publication status: Published - 13 Jan 2023

Scopus Subject Areas

  • Electrical and Electronic Engineering
  • Earth and Planetary Sciences(all)

User-Defined Keywords

  • Hyperspectral denoising
  • radiometric correction
  • smile effect
  • spectral smile correction
