LNNet: Lightweight Nested Network for motion deblurring

Cai Guo, Qian Wang, Hong Ning Dai*, Hao Wang, Ping Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Motion-deblurring methods based on convolutional neural networks (CNNs) have recently demonstrated advantages over conventional methods. However, the repeated scaling or slicing operations these methods apply to input images inevitably cause spatial information loss, while recent methods built on complex models incur large model sizes and high computing costs. Balancing deblurring performance against cost therefore remains challenging. To this end, we propose a lightweight nested network (LNNet) for the motion-deblurring task. Our LNNet leverages several simple yet efficient sub-networks to process deblurring features at each stage. We design a nested connection that reduces model size when connecting sub-networks, reuses deblurring information, and promotes feature diversity. We further introduce a feature-fusion module to improve deblurring performance. We conduct extensive experiments on a workstation and an embedded mobile edge computing (MEC) platform to evaluate LNNet against existing methods. The results demonstrate that LNNet outperforms state-of-the-art methods in deblurring quality while keeping a small model size and short running time. Moreover, the results show that our model is well suited to other embedded devices.
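The abstract describes sub-networks linked by a nested connection that reuses earlier deblurring features, followed by a feature-fusion module. The following is a minimal NumPy sketch of that connectivity pattern only, not the paper's implementation: the sub-network internals, layer counts, and all function names (`subnet`, `lnnet_sketch`) are hypothetical stand-ins, since the page does not specify them.

```python
import numpy as np

def subnet(x, w):
    """Hypothetical stand-in for one lightweight sub-network:
    a single channel-mixing step with a ReLU. The paper's actual
    sub-networks are small CNNs whose layers are not given here."""
    return np.maximum(x @ w, 0.0)

def lnnet_sketch(features, weights, fuse_w):
    """Nested connectivity sketch: each sub-network consumes the
    concatenation of the input features and all earlier sub-network
    outputs, so deblurring information is reused across stages.
    A final feature-fusion step mixes every stage's output."""
    outs = [features]
    for w in weights:
        x = np.concatenate(outs, axis=-1)   # reuse all earlier features
        outs.append(subnet(x, w))
    fused = np.concatenate(outs[1:], axis=-1)  # feature-fusion module
    return fused @ fuse_w

# Toy shapes: 4 "pixels", 8-channel features, 3 nested sub-networks.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
ws = [rng.standard_normal((8 * (i + 1), 8)) for i in range(3)]
fuse = rng.standard_normal((3 * 8, 8))
out = lnnet_sketch(feats, ws, fuse)
print(out.shape)  # (4, 8)
```

Because every stage sees all previous outputs through concatenation rather than through extra scaling or slicing, this connectivity pattern avoids the spatial-information loss the abstract attributes to repeated rescaling, which is one way a nested design can stay lightweight.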

Original language: English
Article number: 102584
Journal: Journal of Systems Architecture
Volume: 129
DOIs
Publication status: Published - Aug 2022

Scopus Subject Areas

  • Software
  • Hardware and Architecture

User-Defined Keywords

  • Lightweight AI model
  • Mobile edge computing
  • Motion deblurring
  • Nested neural network
