Paper Title
Once-for-All Adversarial Training: In-Situ Tradeoff between Robustness and Accuracy for Free
Paper Authors
Paper Abstract
Adversarial training and its many variants substantially improve deep network robustness, yet at the cost of compromising standard accuracy. Moreover, the training process is heavy, making it impractical to thoroughly explore the trade-off between accuracy and robustness. This paper asks a new question: how can a trained model be quickly calibrated in-situ, to examine the achievable trade-offs between its standard and robust accuracies, without (re-)training it many times? Our proposed framework, Once-for-all Adversarial Training (OAT), is built on an innovative model-conditional training scheme that takes a controlling hyper-parameter as input. The trained model can then be adjusted among different standard and robust accuracies "for free" at testing time. As an important knob, we exploit dual batch normalization to separate standard and adversarial feature statistics, so that both can be learned in one model without degrading performance. We further extend OAT to a Once-for-all Adversarial Training and Slimming (OATS) framework, which allows a joint trade-off among accuracy, robustness, and runtime efficiency. Experiments show that, without any re-training or ensembling, OAT/OATS achieve similar or even superior performance compared to dedicatedly trained models at various configurations. Our code and pretrained models are available at: https://github.com/VITA-Group/Once-for-All-Adversarial-Training.
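To make the two mechanisms named in the abstract concrete, below is a minimal PyTorch sketch written from the abstract alone, not from the authors' released code: a dual-BN layer that keeps separate statistics for clean and adversarial batches, and a toy network that receives the control hyper-parameter λ as an extra input and combines clean and adversarial losses with a λ-weighted sum. The names (`DualBatchNorm2d`, `TinyConditionalNet`, `oat_style_step`), the channel-concatenation conditioning, and the exact loss weighting are illustrative assumptions rather than the paper's implementation; see the linked repository for the actual method.

```python
# Sketch of dual batch normalization + lambda-conditional training, under the
# assumptions stated above. NOT the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualBatchNorm2d(nn.Module):
    """Two BN branches behind one interface; the caller picks the branch per batch."""

    def __init__(self, num_features: int):
        super().__init__()
        self.bn_clean = nn.BatchNorm2d(num_features)  # statistics of standard (clean) features
        self.bn_adv = nn.BatchNorm2d(num_features)    # statistics of adversarial features

    def forward(self, x: torch.Tensor, adversarial: bool = False) -> torch.Tensor:
        return self.bn_adv(x) if adversarial else self.bn_clean(x)


class TinyConditionalNet(nn.Module):
    """Toy conv net: the control value `lam` is broadcast as an extra input channel
    (a simplification of the paper's conditioning mechanism, not a reproduction of it)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3 + 1, 16, kernel_size=3, padding=1)  # +1 channel for lam
        self.dbn = DualBatchNorm2d(16)
        self.head = nn.Linear(16, num_classes)

    def forward(self, x, lam, adversarial=False):
        lam_map = torch.full_like(x[:, :1], lam)           # broadcast lam to an extra channel
        h = F.relu(self.dbn(self.conv(torch.cat([x, lam_map], dim=1)), adversarial))
        return self.head(h.mean(dim=(2, 3)))               # global average pool + linear head


def oat_style_step(model, x, y, x_adv, lam):
    """One hypothetical training objective: lam-weighted sum of clean and adversarial
    cross-entropy losses. The exact weighting used by OAT is an assumption here."""
    loss_clean = F.cross_entropy(model(x, lam, adversarial=False), y)
    loss_adv = F.cross_entropy(model(x_adv, lam, adversarial=True), y)
    return (1.0 - lam) * loss_clean + lam * loss_adv


if __name__ == "__main__":
    model = TinyConditionalNet()
    x = torch.randn(4, 3, 32, 32)
    y = torch.randint(0, 10, (4,))
    x_adv = x + 0.03 * torch.randn_like(x).sign()  # placeholder perturbation, not a real attack
    lam = 0.5                                      # would be sampled per batch during training
    print(oat_style_step(model, x, y, x_adv, lam).item())
```

At test time, the same `lam` knob would be varied to move along the standard-vs-robust accuracy curve without retraining; in this sketch that simply means calling the model with a different `lam` value.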