TY - JOUR
T1 - Multi-stage feature-fusion dense network for motion deblurring
AU - Guo, Cai
AU - Wang, Qian
AU - Dai, Hong Ning
AU - Li, Ping
N1 - Funding Information:
The work described in this paper was partially supported by Science and Technology Planning Project of Guangdong Province of China (GDKTP202004920, 2022A1515011551), Natural Science Foundation of Guangdong Province of China (2018A0303070009, 2021A1515011091), Project of Educational Commission of Guangdong Province of China (2018KTSCX143, 2020ZDZX3056, 2021KTSCX07, 2021KQNCX051), Special Basic Cooperative Research Programs of Yunnan Provincial Undergraduate Universities’ Association (No. 202101BA070001-045) and the Hong Kong Polytechnic University under Grant P0030419, Grant P0042740, and Grant P0035358.
Publisher Copyright:
© 2022 Elsevier Inc.
PY - 2023/2
Y1 - 2023/2
N2 - Although convolutional neural networks (CNNs) have recently shown considerable progress in motion deblurring, most existing methods that adopt multi-scale input schemes still struggle to accurately restore heavily-blurred regions in blurry images. Several recent methods aim to further improve the deblurring effect with larger and more complex models, but these inevitably incur huge computing costs. To address the performance-complexity trade-off, we propose a multi-stage feature-fusion dense network (MFFDNet) for motion deblurring. Each sub-network of our MFFDNet has a similar structure and the same input scale. Meanwhile, we propose a feature-fusion dense connection structure to reuse the extracted features, thereby improving the deblurring effect. Moreover, instead of using a multi-scale loss function, we calculate the loss only at the output of the last stage since the input scale of our sub-networks is invariant. Experimental results show that MFFDNet maintains a relatively small computing cost while outperforming state-of-the-art motion-deblurring methods. The source code is publicly available at: https://github.com/CaiGuoHS/MFFDNet_release.
AB - Although convolutional neural networks (CNNs) have recently shown considerable progress in motion deblurring, most existing methods that adopt multi-scale input schemes still struggle to accurately restore heavily-blurred regions in blurry images. Several recent methods aim to further improve the deblurring effect with larger and more complex models, but these inevitably incur huge computing costs. To address the performance-complexity trade-off, we propose a multi-stage feature-fusion dense network (MFFDNet) for motion deblurring. Each sub-network of our MFFDNet has a similar structure and the same input scale. Meanwhile, we propose a feature-fusion dense connection structure to reuse the extracted features, thereby improving the deblurring effect. Moreover, instead of using a multi-scale loss function, we calculate the loss only at the output of the last stage since the input scale of our sub-networks is invariant. Experimental results show that MFFDNet maintains a relatively small computing cost while outperforming state-of-the-art motion-deblurring methods. The source code is publicly available at: https://github.com/CaiGuoHS/MFFDNet_release.
KW - Channel-based multi-layer perceptrons
KW - Feature-fusion dense connections
KW - Motion deblurring
KW - Multi-stage network
UR - http://www.scopus.com/inward/record.url?scp=85144015716&partnerID=8YFLogxK
U2 - 10.1016/j.jvcir.2022.103717
DO - 10.1016/j.jvcir.2022.103717
M3 - Journal article
SN - 1047-3203
VL - 90
JO - Journal of Visual Communication and Image Representation
JF - Journal of Visual Communication and Image Representation
M1 - 103717
ER -