Paper Title
Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks
Paper Authors
Abstract
Recent studies show that Deep Learning Neural Networks (DNNs) are vulnerable to subtle perturbations that are imperceptible to the human visual system yet can fool DNN models into producing wrong outputs. A class of adversarial attack algorithms has been proposed to generate robust physical perturbations under different circumstances. These algorithms are among the first efforts to advance secure deep learning by providing an avenue for training future defense networks; however, their intrinsic complexity prevents broader usage. In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays. Our design significantly improves the throughput of a visual adversarial perturbation system, which can further improve the robustness and security of future deep learning systems. Based on the algorithm's uniqueness, we propose four implementations of the adversarial attack accelerator ($A^3$) that improve throughput, energy efficiency, and computational efficiency.
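To make the notion of a "subtle perturbation that fools a model" concrete, the sketch below shows a minimal gradient-sign (FGSM-style) perturbation against a toy logistic classifier. This is an illustrative assumption on our part, not the paper's accelerator algorithm or its attack network; all names (`sigmoid`, `fgsm_perturb`, the weights `w`, `b`) are hypothetical.

```python
import numpy as np

# Hypothetical toy model: p = sigmoid(w . x + b), binary label y in {0, 1}.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_x(x, w, b, y):
    # Gradient of the cross-entropy loss with respect to the INPUT x
    # (not the weights), which is what an adversarial attack perturbs.
    p = sigmoid(np.dot(w, x) + b)
    return (p - y) * w

def fgsm_perturb(x, w, b, y, eps):
    # x_adv = x + eps * sign(dL/dx): step in the direction that
    # increases the loss, pushing the model toward misclassification.
    return x + eps * np.sign(loss_grad_wrt_x(x, w, b, y))

# Clean input with true label y = 1; the model classifies it correctly.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])
y = 1.0

p_clean = sigmoid(np.dot(w, x) + b)          # approx. 0.62 -> predicts 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.6)
p_adv = sigmoid(np.dot(w, x_adv) + b)        # approx. 0.21 -> predicts 0
print(p_clean > 0.5, p_adv > 0.5)
```

Generating such perturbations at scale requires repeated gradient computations through the target network, which is the intrinsic cost that motivates hardware acceleration in the first place.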