AutoPruner: An end-to-end trainable filter pruning method for efficient deep model inference

Bibliographic Details
Published in: Pattern Recognition, Vol. 107, p. 107461
Main Authors: Luo, Jian-Hao, Wu, Jianxin
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01-11-2020
Description
Summary:
• Filter selection and model fine-tuning are integrated into a single end-to-end trainable framework.
• Adaptive compression ratio and multi-layer compression.
• Good generalization ability.

Channel pruning is an important method for speeding up the inference of CNN models. Previous filter pruning algorithms treat importance evaluation and model fine-tuning as two independent steps. This paper argues that combining them into a single end-to-end trainable system leads to better results. We propose an efficient channel selection layer, named AutoPruner, that finds less important filters automatically in a joint training manner. AutoPruner takes the previous layer's activation responses as input and generates a true binary index code for pruning; hence, all filters corresponding to zero index values can be removed safely after training. By gradually erasing unimportant filters, we prevent an excessive drop in model accuracy. Compared with previous state-of-the-art pruning algorithms (including training from scratch), AutoPruner achieves significantly better performance. Furthermore, ablation experiments show that the proposed mini-batch pooling and binarization operations are vital to the success of model pruning.
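The summary describes a channel selection layer that pools activation responses over the mini-batch, scores each channel, and binarizes the scores into an index code. The following numpy sketch illustrates that idea only; the function name, shapes, and the scaled-sigmoid binarization schedule are our assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def channel_mask(activations, weight, bias, alpha):
    """Hypothetical AutoPruner-style channel selection (illustrative sketch).

    activations: (N, C, H, W) responses from the previous layer.
    Returns a per-channel mask in [0, 1]; as alpha grows during training,
    the scaled sigmoid saturates toward a binary 0/1 index code.
    """
    # Mini-batch pooling: average over batch and spatial dims -> (C,)
    pooled = activations.mean(axis=(0, 2, 3))
    # Fully connected scoring of the C channels (assumed layer form)
    scores = weight @ pooled + bias
    # Scaled sigmoid binarization: larger alpha -> closer to {0, 1}
    return 1.0 / (1.0 + np.exp(-alpha * scores))

# Toy example (all shapes and values hypothetical)
rng = np.random.default_rng(0)
acts = rng.standard_normal((8, 4, 5, 5))   # N=8 images, C=4 channels
W = rng.standard_normal((4, 4))
b = np.zeros(4)

soft = channel_mask(acts, W, b, alpha=1.0)     # early training: soft mask
hard = channel_mask(acts, W, b, alpha=100.0)   # late training: near-binary
pruned = acts * hard[None, :, None, None]      # ~0-mask channels are erased
```

Channels whose mask entries converge to zero can then be removed outright, which is what makes the selection and fine-tuning jointly trainable in a single pass.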
ISSN: 0031-3203 (print); 1873-5142 (electronic)
DOI:10.1016/j.patcog.2020.107461