Paper Title

Rethinking Class-Discrimination Based CNN Channel Pruning

Paper Authors

Yuchen Liu, David Wentzlaff, S. Y. Kung

Abstract

Channel pruning has received ever-increasing focus on network compression. In particular, class-discrimination based channel pruning has made major headway, as it fits seamlessly with the classification objective of CNNs and provides good explainability. Prior works singly propose and evaluate their discriminant functions, while further study on the effectiveness of the adopted metrics is absent. To this end, we initiate the first study on the effectiveness of a broad range of discriminant functions on channel pruning. Conventional single-variate binary-class statistics like Student's T-Test are also included in our study via an intuitive generalization. The winning metric of our study has a greater ability to select informative channels over other state-of-the-art methods, which is substantiated by our qualitative and quantitative analysis. Moreover, we develop a FLOP-normalized sensitivity analysis scheme to automate the structural pruning procedure. On CIFAR-10, CIFAR-100, and ILSVRC-2012 datasets, our pruned models achieve higher accuracy with less inference cost compared to state-of-the-art results. For example, on ILSVRC-2012, our 44.3% FLOPs-pruned ResNet-50 has only a 0.3% top-1 accuracy drop, which significantly outperforms the state of the art.
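To make the idea concrete, here is a minimal sketch of scoring channels with a binary-class Student's t-statistic, one of the single-variate statistics the abstract mentions. This is an illustrative assumption, not the paper's actual discriminant function or its multi-class generalization: `channel_discrimination_scores` and `prune_channels` are hypothetical helper names, activations are assumed to be spatially averaged feature maps, and a Welch-style variance term is used for numerical robustness.

```python
import numpy as np

def channel_discrimination_scores(activations, labels):
    """Score each channel by a two-class Student's t-statistic.

    activations: (N, C) array of per-channel activations
        (e.g. feature maps averaged over spatial dimensions).
    labels: (N,) binary array of 0/1 class labels.
    A larger |t| means the channel separates the two classes better.
    """
    a0 = activations[labels == 0]
    a1 = activations[labels == 1]
    n0, n1 = len(a0), len(a1)
    scores = np.empty(activations.shape[1])
    for c in range(activations.shape[1]):
        m0, m1 = a0[:, c].mean(), a1[:, c].mean()
        v0, v1 = a0[:, c].var(ddof=1), a1[:, c].var(ddof=1)
        # Welch's form of the t-denominator; small eps for stability.
        denom = np.sqrt(v0 / n0 + v1 / n1) + 1e-12
        scores[c] = abs(m0 - m1) / denom
    return scores

def prune_channels(activations, labels, keep_ratio=0.5):
    """Return indices of the highest-scoring channels to keep."""
    scores = channel_discrimination_scores(activations, labels)
    k = max(1, int(keep_ratio * len(scores)))
    return np.argsort(scores)[::-1][:k]
```

In practice such scores would be computed per layer from held-out activations, and the paper's FLOP-normalized sensitivity analysis would then decide how many channels each layer retains; that step is not modeled here.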
