TY - JOUR
T1 - Hyperspectral Image Super-Resolution via Deep Progressive Zero-Centric Residual Learning
AU - Zhu, Zhiyu
AU - Hou, Junhui
AU - Chen, Jie
AU - Zeng, Huanqiang
AU - Zhou, Jiantao
N1 - Funding Information:
Manuscript received June 22, 2020; revised November 6, 2020; accepted December 1, 2020. Date of publication December 17, 2020; date of current version December 29, 2020. This work was supported in part by the Hong Kong Research Grants Council under Grant 9048123 (CityU 21211518) and Grant 9042820 (CityU 11219019) and in part by the Macau Science and Technology Development Fund under Grant 077/2018/A2. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Jocelyn Chanussot. (Corresponding author: Junhui Hou.) Zhiyu Zhu and Junhui Hou are with the Department of Computer Science, City University of Hong Kong, Hong Kong (e-mail: [email protected]; [email protected]).
PY - 2021/1
Y1 - 2021/1
N2 - This paper explores the problem of hyperspectral image (HSI) super-resolution that merges a low-resolution HSI (LR-HSI) and a high-resolution multispectral image (HR-MSI). The cross-modality distribution of the spatial and spectral information makes the problem challenging. Inspired by classic wavelet decomposition-based image fusion, we propose a novel lightweight deep neural network-based framework, namely the progressive zero-centric residual network (PZRes-Net), to address this problem efficiently and effectively. Specifically, PZRes-Net learns a high-resolution, zero-centric residual image, which contains high-frequency spatial details of the scene across all spectral bands, from both inputs in a progressive fashion along the spectral dimension. The resulting residual image is then superimposed onto the up-sampled LR-HSI in a mean-value-invariant manner, leading to a coarse HR-HSI, which is further refined by exploring the coherence across all spectral bands simultaneously. To learn the residual image efficiently and effectively, we employ spectral-spatial separable convolution with dense connections. In addition, we propose zero-mean normalization, applied to the feature maps of each layer, to realize the zero-mean characteristic of the residual image. Extensive experiments on both real and synthetic benchmark datasets demonstrate that our PZRes-Net outperforms state-of-the-art methods by a significant margin in terms of four quantitative metrics as well as visual quality; e.g., our PZRes-Net improves the PSNR by more than 3 dB while using 2.3× fewer parameters and consuming 15× fewer FLOPs. The code is publicly available at https://github.com/zbzhzhy/PZRes-Net
AB - This paper explores the problem of hyperspectral image (HSI) super-resolution that merges a low-resolution HSI (LR-HSI) and a high-resolution multispectral image (HR-MSI). The cross-modality distribution of the spatial and spectral information makes the problem challenging. Inspired by classic wavelet decomposition-based image fusion, we propose a novel lightweight deep neural network-based framework, namely the progressive zero-centric residual network (PZRes-Net), to address this problem efficiently and effectively. Specifically, PZRes-Net learns a high-resolution, zero-centric residual image, which contains high-frequency spatial details of the scene across all spectral bands, from both inputs in a progressive fashion along the spectral dimension. The resulting residual image is then superimposed onto the up-sampled LR-HSI in a mean-value-invariant manner, leading to a coarse HR-HSI, which is further refined by exploring the coherence across all spectral bands simultaneously. To learn the residual image efficiently and effectively, we employ spectral-spatial separable convolution with dense connections. In addition, we propose zero-mean normalization, applied to the feature maps of each layer, to realize the zero-mean characteristic of the residual image. Extensive experiments on both real and synthetic benchmark datasets demonstrate that our PZRes-Net outperforms state-of-the-art methods by a significant margin in terms of four quantitative metrics as well as visual quality; e.g., our PZRes-Net improves the PSNR by more than 3 dB while using 2.3× fewer parameters and consuming 15× fewer FLOPs. The code is publicly available at https://github.com/zbzhzhy/PZRes-Net
KW - cross-modality
KW - deep learning
KW - Hyperspectral imagery
KW - image fusion
KW - super-resolution
KW - zero-mean normalization
UR - http://www.scopus.com/inward/record.url?scp=85098752118&partnerID=8YFLogxK
U2 - 10.1109/TIP.2020.3044214
DO - 10.1109/TIP.2020.3044214
M3 - Journal article
C2 - 33332269
AN - SCOPUS:85098752118
SN - 1057-7149
VL - 30
SP - 1423
EP - 1438
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -