Filter pruning by image channel reduction in pre-trained convolutional neural networks

Gi Su Chung, Chee Sun Won

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

There are domain-specific image classification problems, such as facial emotion and house-number classification, where the color information in the images may not be crucial for recognition. This motivates us to convert RGB images to gray-scale images with a single Y channel before feeding them into a pre-trained convolutional neural network (CNN). Since the existing CNN models are pre-trained on three-channel color images, one can expect that some trained filters are more sensitive to color than to brightness. Therefore, by adopting single-channel gray-scale images as inputs, we can prune some of the convolutional filters in the first layer of the pre-trained CNN. This first-layer pruning greatly facilitates the filter compression of the subsequent convolutional layers. The pre-trained CNN with the compressed filters is then fine-tuned with the single-channel images for a domain-specific dataset. Experimental results on the facial emotion and Street View House Numbers (SVHN) datasets show that the proposed method achieves a significant compression of the pre-trained CNN filters. For example, compared with the VGG-16 model fine-tuned on color images, we save 10.538 GFLOPs of computation while keeping the classification accuracy at around 84% on the facial emotion RAF-DB dataset.
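
As a rough illustration of the first-layer step described in the abstract, the PyTorch sketch below prunes the most color-sensitive filters of a pre-trained VGG-16 and rewires the network for single-channel Y inputs. It is not the authors' implementation: the cross-channel sensitivity score, the 0.75 keep ratio, and the sum-over-channels collapse of the surviving filters are assumptions introduced here purely for illustration.

```python
# Sketch (assumptions noted below): prune color-sensitive first-layer filters
# of a pre-trained VGG-16 so the network accepts single-channel (Y) inputs.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.vgg16(weights="IMAGENET1K_V1")

conv1 = model.features[0]          # Conv2d(3, 64, kernel_size=3, padding=1)
conv2 = model.features[2]          # Conv2d(64, 64, kernel_size=3, padding=1)
W = conv1.weight.data              # shape (64, 3, 3, 3)

# Assumed color-sensitivity score: how much each filter's R/G/B slices deviate
# from their channel-wise mean. Brightness-oriented filters have nearly
# identical slices, so their score is close to zero.
mean_slice = W.mean(dim=1, keepdim=True)                  # (64, 1, 3, 3)
color_score = (W - mean_slice).abs().sum(dim=(1, 2, 3))   # (64,)

keep_ratio = 0.75                                         # illustrative choice
n_keep = int(keep_ratio * W.size(0))
keep_idx = torch.argsort(color_score)[:n_keep]            # most brightness-oriented

# Collapse each surviving 3-channel filter to a single Y channel. Summing over
# the input-channel axis preserves the response the filter would give to a
# gray image replicated as R = G = B = Y.
W_y = W[keep_idx].sum(dim=1, keepdim=True)                # (n_keep, 1, 3, 3)

new_conv1 = nn.Conv2d(1, n_keep, kernel_size=3, padding=1)
new_conv1.weight.data.copy_(W_y)
new_conv1.bias.data.copy_(conv1.bias.data[keep_idx])

# The second conv layer must drop the input channels fed by the pruned filters.
new_conv2 = nn.Conv2d(n_keep, conv2.out_channels, kernel_size=3, padding=1)
new_conv2.weight.data.copy_(conv2.weight.data[:, keep_idx])
new_conv2.bias.data.copy_(conv2.bias.data)

model.features[0] = new_conv1
model.features[2] = new_conv2
# The compressed model would then be fine-tuned on single-channel (Y) images.
```

The collapse-by-summation and the variance-like score are only one plausible reading of "filters more sensitive to color than to brightness"; the paper itself should be consulted for the exact pruning criterion and compression of the subsequent layers.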

Original language: English
Pages (from-to): 30817-30826
Number of pages: 10
Journal: Multimedia Tools and Applications
Volume: 80
Issue number: 20
DOIs
State: Published - Aug 2021

Keywords

  • CNN filter compression
  • Facial emotion classification
  • Image channel reduction
  • Network pruning
