Self-Fusion Convolutional Neural Networks

Shenjian GONG, Shanshan ZHANG*, Jian YANG, Pong Chi YUEN

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

4 Citations (Scopus)

Abstract

Efficiency is an important concern for practical applications; it is therefore of great importance to build effective lightweight networks. This paper proposes a novel lightweight feature self-fusion convolutional (SFC) module, which consists of self-fusion and point-wise convolution. The core of SFC is a three-step self-fusion. First, each input feature map is expanded to a high-dimensional space individually, prohibiting connections with other input channels. Second, we fuse all features from the same input in the high-dimensional space to enhance the representation ability. Finally, we compress the high-dimensional features to a low-dimensional space. After self-fusion, we connect all features with one point-wise convolution. Compared to the inverted bottleneck, the SFC module decreases the number of parameters by replacing the dense connections among channels with self-fusion. To the best of our knowledge, SFC is the first method to build lightweight networks by feature self-fusion.
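Read literally, the three self-fusion steps can be realized as channel-grouped convolutions, with a single 1x1 convolution at the end that finally mixes information across inputs. The PyTorch sketch below is one hypothetical reading of that description; the class name SFCModule, the expansion ratio t, and the 3x3 kernel in the fusion step are illustrative assumptions, not details specified in the abstract.

import torch
import torch.nn as nn

class SFCModule(nn.Module):
    """Hypothetical sketch of a feature self-fusion convolutional (SFC) module.

    Every layer before the final point-wise convolution is grouped by input
    channel, so no two input channels interact until the closing 1x1
    convolution. The expansion ratio `t` and the 3x3 fusion kernel are
    assumptions made for illustration.
    """

    def __init__(self, in_channels, out_channels, t=4):
        super().__init__()
        hidden = in_channels * t
        # Step 1: expand each input channel to t channels independently
        # (groups == in_channels prohibits connections across input channels).
        self.expand = nn.Conv2d(in_channels, hidden, kernel_size=1,
                                groups=in_channels, bias=False)
        # Step 2: fuse the t features derived from the same input channel
        # in the high-dimensional space (spatial 3x3, still per group).
        self.fuse = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1,
                              groups=in_channels, bias=False)
        # Step 3: compress each group back to a low-dimensional space.
        self.compress = nn.Conv2d(hidden, in_channels, kernel_size=1,
                                  groups=in_channels, bias=False)
        # After self-fusion: one point-wise convolution connects all features.
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.expand(x)
        x = self.fuse(x)
        x = self.compress(x)
        return self.act(self.bn(self.pointwise(x)))

# Example: torch.randn(1, 32, 56, 56) passed through SFCModule(32, 64)
# yields a tensor of shape (1, 64, 56, 56).

Under this reading, each layer before the final point-wise convolution uses a group count equal to the number of input channels, which divides its channel-mixing parameters by that group count; this is where the savings over a densely connected inverted bottleneck would come from.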

We then build a new network, namely SFC-Net, by stacking SFC modules. Experimental results on the CIFAR and downsampled ImageNet datasets demonstrate that SFC-Net achieves better performance than some previous popular CNNs with fewer parameters, and achieves performance comparable to other previous lightweight architectures. The code is available at https://github.com/Yankeegsj/Self-fusion.
Original language: English
Pages (from-to): 50-55
Number of pages: 6
Journal: Pattern Recognition Letters
Volume: 152
DOIs
Publication status: Published - Dec 2021

User-Defined Keywords

  • Lightweight neural networks
  • Efficient feature fusion
  • Image classification
