Paper Title
EDropout: Energy-Based Dropout and Pruning of Deep Neural Networks
Paper Authors
Paper Abstract
Dropout is a well-known regularization method that samples sub-networks from a larger deep neural network and trains different sub-networks on different subsets of the data. Inspired by the dropout concept, we propose EDropout as an energy-based framework for pruning neural networks in classification tasks. In this approach, a set of binary pruning state vectors (a population) represents a set of corresponding sub-networks of an arbitrary provided original neural network. An energy loss function assigns a scalar energy loss value to each pruning state. The energy-based model stochastically evolves the population toward states with lower energy loss. The best pruning state is then selected and applied to the original network. Similar to dropout, the kept weights are updated using backpropagation in a probabilistic model. The energy-based model then searches again for better pruning states, and the cycle continues. In effect, each iteration switches between the energy model, which manages the pruning states, and the probabilistic model, which updates the temporarily unpruned weights. The population can dynamically converge to a single pruning state, which can be interpreted as dropout leading to pruning of the network. From an implementation perspective, EDropout can prune typical neural networks without modifying the network architecture. We evaluated the proposed method with different flavours of ResNets, AlexNet, and SqueezeNet on the Kuzushiji, Fashion, CIFAR-10, CIFAR-100, and Flowers datasets, and compared the pruning rate and classification performance of the models. On average, networks trained with EDropout achieved a pruning rate of more than $50\%$ of the trainable parameters, with less than $5\%$ and $1\%$ drop in Top-1 and Top-5 classification accuracy, respectively.
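To make the alternating procedure concrete, below is a minimal PyTorch sketch of the loop the abstract describes, applied to a toy classifier. The toy network, the energy definition (plain cross-entropy of the masked sub-network), and the simple crossover-and-mutation evolution step are illustrative assumptions, not the paper's exact energy-based model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for "an arbitrary provided original neural network".
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
n_params = sum(p.numel() for p in model.parameters())

# Population of binary pruning-state vectors, one bit per trainable parameter.
pop_size = 8
population = (torch.rand(pop_size, n_params) > 0.5).float()

def split(state):
    """Reshape a flat pruning-state vector into per-parameter masks."""
    masks, offset = [], 0
    for p in model.parameters():
        masks.append(state[offset:offset + p.numel()].view_as(p))
        offset += p.numel()
    return masks

def energy(state, x, y):
    """Scalar energy loss of the sub-network selected by `state`
    (here simply the cross-entropy of the masked network)."""
    backup = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p, m in zip(model.parameters(), split(state)):
            p.mul_(m)                          # temporarily prune
        e = F.cross_entropy(model(x), y).item()
        for p, b in zip(model.parameters(), backup):
            p.copy_(b)                         # restore full weights
    return e

def evolve(pop, energies, mut=0.05):
    """Stochastic evolution step (an illustrative assumption): uniform
    crossover with the lowest-energy state, then random bit flips."""
    best = pop[energies.argmin()]
    cross = torch.rand_like(pop) < 0.5
    child = torch.where(cross, best.expand_as(pop), pop)
    flip = (torch.rand_like(child) < mut).float()
    return (child - flip).abs()                # XOR: flip selected bits

for step in range(100):
    x = torch.randn(32, 784)                   # dummy batch; use real data here
    y = torch.randint(0, 10, (32,))

    # 1) Energy-based model: score each pruning state, keep the best, evolve.
    energies = torch.tensor([energy(s, x, y) for s in population])
    masks = split(population[energies.argmin()])
    population = evolve(population, energies)

    # 2) Probabilistic model: apply the best state, backprop on kept weights.
    backup = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p, m in zip(model.parameters(), masks):
            p.mul_(m)
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                      # pruned weights keep their old
        for p, m, b in zip(model.parameters(), masks, backup):
            p.copy_(m * p + (1 - m) * b)       # values (temporary pruning)
```

As the abstract notes, once the population converges to a single pruning state, the temporary masking effectively becomes permanent pruning of the network.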