Paper Title
Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
Paper Authors
Paper Abstract
Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds on output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense. The majority of LiRPA-based methods focus on simple feed-forward networks and need particular manual derivations and implementations when extended to other architectures. In this paper, we develop an automatic framework to enable perturbation analysis on any neural network structure, by generalizing existing LiRPA algorithms such as CROWN to operate on general computational graphs. The flexibility, differentiability and ease of use of our framework allow us to obtain state-of-the-art results on LiRPA-based certified defense on fairly complicated networks like DenseNet, ResNeXt and Transformer that are not supported by prior works. Our framework also enables loss fusion, a technique that significantly reduces the computational complexity of LiRPA for certified defense. For the first time, we demonstrate LiRPA-based certified defense on Tiny ImageNet and Downscaled ImageNet, to which previous approaches could not scale due to the relatively large number of classes. Our work also yields an open-source library for the community to apply LiRPA to areas beyond certified defense without much LiRPA expertise, e.g., we create a neural network with a probably flat optimization landscape by applying LiRPA to network parameters. Our open-source library is available at https://github.com/KaidiXu/auto_LiRPA.
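To give a concrete sense of what "provable bounds on output neurons given input perturbation" means, the sketch below propagates interval bounds through a tiny two-layer ReLU network under an L-infinity perturbation. This is a deliberately simplified illustration (plain interval bound propagation in NumPy, not the paper's CROWN-style linear relaxation or its auto_LiRPA library), and the network weights are arbitrary values chosen for the example.

```python
import numpy as np

def linear_interval(W, b, lo, hi):
    """Propagate elementwise bounds [lo, hi] through y = W @ x + b.

    For the lower output bound, positive weights take the lower input
    bound and negative weights take the upper input bound; vice versa
    for the upper output bound.
    """
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_interval(lo, hi):
    """ReLU is monotone, so elementwise bounds pass straight through."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical two-layer network: y = W2 @ relu(W1 @ x + b1) + b2
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, -0.25])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

# L-infinity perturbation of radius eps around a nominal input x0
x0, eps = np.array([0.5, 0.5]), 0.1
lo, hi = x0 - eps, x0 + eps

lo, hi = relu_interval(*linear_interval(W1, b1, lo, hi))
lo, hi = linear_interval(W2, b2, lo, hi)
print("certified output range:", lo, hi)
```

Every input within the perturbation ball is guaranteed to produce an output inside `[lo, hi]`; methods like CROWN tighten these bounds by keeping linear (rather than constant) relaxations of each activation, which is what the paper automates for arbitrary computational graphs.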