Paper Title


Towards the Quantification of Safety Risks in Deep Neural Networks

Authors

Peipei Xu, Wenjie Ruan, Xiaowei Huang

Abstract


Safety concerns about deep neural networks (DNNs) have been raised when they are applied to critical sectors. In this paper, we define safety risks by requesting the alignment of the network's decision with human perception. To enable a general methodology for quantifying safety risks, we define a generic safety property and instantiate it to express various safety risks. For the quantification of risks, we take the maximum radius of safe norm balls, in which no safety risk exists. The computation of the maximum safe radius is reduced to the computation of their respective Lipschitz metrics - the quantities to be computed. In addition to the known adversarial example, reachability example, and invariant example, in this paper we identify a new class of risk - uncertainty example - on which humans can tell easily but the network is unsure. We develop an algorithm, inspired by derivative-free optimization techniques and accelerated by tensor-based parallelization on GPUs, to support efficient computation of the metrics. We perform evaluations on several benchmark neural networks, including ACAS Xu, MNIST, CIFAR-10, and ImageNet networks. The experiments show that our method can achieve competitive performance on safety quantification in terms of the tightness and the efficiency of computation. Importantly, as a generic approach, our method can work with a broad class of safety risks and without restrictions on the structure of neural networks.
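To make the safe-norm-ball idea in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' algorithm) of a derivative-free, sampling-based estimate of a local Lipschitz quotient, which can then be turned into a heuristic lower bound on the maximum safe radius. The names `f`, `x0`, `epsilon`, and `n_samples` are placeholders assumed for illustration; the paper's actual method uses a dedicated optimization scheme with tensor-based GPU parallelization.

```python
# Illustrative sketch only: estimates a local Lipschitz quotient of a network f
# around an input x0 by random sampling inside an L-infinity norm ball, then uses
# it to derive a heuristic lower bound on the maximum safe radius.
# This is NOT the paper's algorithm; all names here are hypothetical placeholders.
import torch


def estimate_local_lipschitz(f, x0, epsilon=0.1, n_samples=1024):
    """Sample-based estimate of the local Lipschitz quotient of f within an
    L-infinity ball of radius epsilon around x0 (batched, so it runs in parallel
    on GPU if x0 lives there)."""
    x0 = x0.unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        y0 = f(x0)
        # Draw perturbations uniformly from [-epsilon, epsilon] per coordinate.
        noise = (torch.rand(n_samples, *x0.shape[1:], device=x0.device) * 2 - 1) * epsilon
        ys = f(x0 + noise)
        # Lipschitz quotient ||f(x) - f(x0)|| / ||x - x0||_inf for each sample.
        num = (ys - y0).flatten(1).norm(dim=1)
        den = noise.flatten(1).norm(float("inf"), dim=1).clamp(min=1e-12)
        return (num / den).max().item()


def safe_radius_lower_bound(margin, lipschitz_estimate):
    """If the decision margin at x0 is `margin` and L bounds the local Lipschitz
    constant, the prediction cannot change within radius margin / L. With a
    sampled estimate of L this is only a heuristic, not a certified bound."""
    return margin / max(lipschitz_estimate, 1e-12)
```

A certified version of this bound would require an upper bound on the Lipschitz constant rather than a sampled estimate, which is where the paper's derivative-free optimization and its tightness guarantees come in.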
