Paper Title
Exploring the Vulnerability of Deep Neural Networks: A Study of Parameter Corruption
Paper Authors
Paper Abstract
We argue that the vulnerability of model parameters is of crucial value to the study of model robustness and generalization, yet little research has been devoted to understanding this matter. In this work, we propose an indicator that measures the robustness of neural network parameters by exploiting their vulnerability via parameter corruption. The proposed indicator describes the maximum loss variation in the non-trivial worst-case scenario under parameter corruption. For practical purposes, we give a gradient-based estimation, which is far more effective than random corruption trials that can hardly induce the worst-case accuracy degradation. Equipped with theoretical support and empirical validation, we are able to systematically investigate the robustness of different model parameters and reveal vulnerabilities of deep neural networks that have rarely received attention before. Moreover, we can enhance the models accordingly with the proposed adversarial corruption-resistant training, which not only improves parameter robustness but also translates into accuracy gains.
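The abstract does not spell out the estimation procedure, so the following PyTorch-style sketch is only an illustration of the general idea: perturb the model parameters one step along the loss gradient (under a small corruption budget) and report the resulting loss increase. The function name gradient_based_corruption, the L2-normalized step, and the epsilon budget are assumptions for this sketch, not details taken from the paper.

```python
import torch


def gradient_based_corruption(model, loss_fn, data, targets, epsilon=1e-2):
    """Hypothetical sketch: estimate the loss change under a small parameter
    corruption by stepping the parameters along the loss gradient (an
    ascent step on parameters rather than on inputs)."""
    model.zero_grad()
    loss_before = loss_fn(model(data), targets)
    loss_before.backward()  # populates p.grad for every trainable parameter

    with torch.no_grad():
        # Global L2 normalization of the gradient direction (an assumption;
        # the paper may use a different corruption constraint).
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

        # Corrupt the parameters in the gradient (loss-increasing) direction.
        originals = []
        for p in model.parameters():
            if p.grad is None:
                continue
            originals.append((p, p.detach().clone()))
            p.add_(epsilon * p.grad / grad_norm)

        loss_after = loss_fn(model(data), targets)

        # Restore the original parameters so the model is left unchanged.
        for p, orig in originals:
            p.copy_(orig)

    # The observed loss increase serves as a rough estimate of the
    # worst-case loss variation under this corruption budget.
    return (loss_after - loss_before).item()
```

In the same spirit, an adversarial corruption-resistant training loop could apply such a corruption step before computing the training loss and then restore the parameters after the update; the exact procedure used in the paper is not given in the abstract.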