Paper Title
Adversarial Scratches: Deployable Attacks to CNN Classifiers
Paper Authors
Paper Abstract
A growing body of work has shown that deep neural networks are susceptible to adversarial examples. These take the form of small perturbations applied to the model's input which lead to incorrect predictions. Unfortunately, most of the literature focuses on visually imperceptible perturbations applied to digital images, which are often, by design, impossible to deploy onto physical targets. We present Adversarial Scratches: a novel L0 black-box attack, which takes the form of scratches in images and which possesses much greater deployability than other state-of-the-art attacks. Adversarial Scratches leverage Bézier curves to reduce the dimensionality of the search space and possibly constrain the attack to a specific location. We test Adversarial Scratches in several scenarios, including a publicly available API and images of traffic signs. Results show that our attack often achieves a higher fooling rate than other deployable state-of-the-art methods, while requiring significantly fewer queries and modifying very few pixels.
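To make the core idea concrete, below is a minimal Python sketch (not the authors' implementation) of how a scratch could be parameterized by a quadratic Bézier curve: three control points plus an RGB color give only nine scalars for a black-box optimizer to search over, rather than every pixel of the image. The function names `bezier_points` and `apply_scratch`, the curve degree, and the one-pixel scratch width are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of a Bézier-parameterized scratch.
import numpy as np

def bezier_points(p0, p1, p2, n=200):
    """Sample n points along a quadratic Bézier curve with control
    points p0, p1, p2, each given as an (x, y) pair."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2 = map(np.asarray, (p0, p1, p2))
    # Standard quadratic Bézier: B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2

def apply_scratch(image, p0, p1, p2, color):
    """Draw a one-pixel-wide scratch of the given RGB color onto a copy
    of `image` (an H x W x 3 uint8 array). Only pixels lying on the
    curve are changed, which keeps the L0 norm of the perturbation
    small. Restricting the control points to a sub-region of the image
    constrains the attack to a specific location."""
    out = image.copy()
    h, w = image.shape[:2]
    for x, y in bezier_points(p0, p1, p2):
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < w and 0 <= yi < h:
            out[yi, xi] = color
    return out
```

In a black-box setting, an attacker would repeatedly query the target classifier with scratched images and adjust the nine parameters (for example, with an evolutionary search) to push the prediction away from the true label; the paper's actual parameterization, curve degree, and optimizer may differ from this sketch.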