Convolution Filter Compression via Sparse Linear Combinations of Quantized Basis

Weichao Lan, Yiu-ming Cheung*, Liang Lan, Juyong Jiang, Zhikai Hu

*Corresponding author for this work

Research output: Contribution to journal › Journal article › peer-review

Abstract

Convolutional neural networks (CNNs) have achieved remarkable performance on a wide range of real-life tasks. However, the large number of parameters in convolutional layers demands substantial storage and computation resources, making it challenging to deploy CNNs on memory-constrained embedded devices. In this article, we propose a novel compression method that generates the convolution filters in each layer from a set of learnable low-dimensional quantized filter bases. The proposed method reconstructs the convolution filters by stacking linear combinations of these filter bases. Because the basis weights take quantized values, the compact filters can be represented with fewer bits, allowing the network to be highly compressed. Furthermore, we enforce sparsity on the combination coefficients via L1-ball projection, which further reduces storage consumption and helps prevent overfitting. We also provide a detailed analysis of the compression performance of the proposed method. Evaluations on image classification and object detection tasks with various network structures demonstrate that the proposed method achieves a higher compression ratio with comparable accuracy compared with existing state-of-the-art filter decomposition and network quantization methods.
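
The abstract describes the mechanism only at a high level; the NumPy sketch below illustrates the general idea of rebuilding a layer's filters as sparse linear combinations of quantized filter bases, with the combination coefficients projected onto an L1 ball. The quantization scheme (sign/scale), the tensor shapes, and the per-channel coefficient layout are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (NumPy only): filters reconstructed as sparse linear
# combinations of quantized bases. All shapes and the toy quantizer are
# assumptions for illustration, not the paper's design.
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of a vector v onto the L1 ball of the given radius."""
    if np.sum(np.abs(v)) <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]              # magnitudes sorted in descending order
    css = np.cumsum(u)
    idx = np.arange(1, v.size + 1)
    rho = np.nonzero(u * idx > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def quantize_sign_scale(basis):
    """Toy quantizer: binarize a basis to {-alpha, +alpha} with alpha = mean |w|."""
    alpha = np.mean(np.abs(basis))
    return alpha * np.sign(basis)

rng = np.random.default_rng(0)
num_bases, k = 8, 3                      # 8 shared 3x3 filter bases (assumed)
out_channels, in_channels = 16, 4        # one convolutional layer (assumed sizes)

bases = rng.standard_normal((num_bases, k, k))
q_bases = np.stack([quantize_sign_scale(b) for b in bases])   # low-bit filter bases

# One coefficient vector per (output, input) channel slice, sparsified on an L1 ball.
coeffs = rng.standard_normal((out_channels, in_channels, num_bases))
coeffs = np.apply_along_axis(project_l1_ball, -1, coeffs, 1.0)

# Reconstruct the full filter bank by stacking the linear combinations of the bases.
filters = np.einsum('oib,bhw->oihw', coeffs, q_bases)
print(filters.shape)                     # (16, 4, 3, 3): out x in x k x k
```

Only the small set of quantized bases and the sparse coefficients need to be stored, which is where the compression comes from; the full-precision filter bank is materialized (or applied implicitly) at inference time.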

Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
DOIs
Publication status: E-pub ahead of print - 24 Sept 2024

Scopus Subject Areas

  • Software
  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications

User-Defined Keywords

  • Filter decomposition
  • network compression
  • quantization
