Paper Title


Powerful Physical Adversarial Examples Against Practical Face Recognition Systems

Paper Authors

Singh, Inderjeet, Araki, Toshinori, Kakizaki, Kazuya

Paper Abstract


It is well known that most existing machine learning (ML)-based safety-critical applications are vulnerable to carefully crafted input instances called adversarial examples (AXs). An adversary can conveniently attack these target systems from the digital as well as the physical world. This paper aims to generate robust physical AXs against face recognition systems. We present a novel smoothness loss function and a patch-noise combo attack for realizing powerful physical AXs. The smoothness loss interjects the concept of delayed constraints into the attack generation process, thereby better handling the optimization complexity and producing smoother AXs for the physical domain. The patch-noise combo attack combines patch noise and imperceptibly small noise from different distributions to generate powerful registration-based physical AXs. An extensive experimental analysis found that our smoothness loss yields more robust and more transferable digital and physical AXs than conventional techniques. Notably, our smoothness loss results in 1.17 and 1.97 times better mean attack success rates (ASR) in physical white-box and black-box attacks, respectively. Our patch-noise combo attack further improves these gains, yielding 2.39 and 4.74 times higher mean ASR than the conventional technique in physical-world white-box and black-box attacks, respectively.
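The abstract does not give the paper's exact smoothness loss. For context, a standard smoothness penalty used when crafting physical adversarial patches is total variation, which penalizes large differences between neighboring pixels so the printed patch survives camera capture. A minimal sketch, assuming NumPy and a 2-D grayscale patch (the function name and shapes are illustrative, not the paper's implementation):

```python
import numpy as np

def total_variation(patch):
    """Total-variation smoothness penalty: sum of squared differences
    between vertically and horizontally adjacent pixels.
    Smoother patches score lower, which is the general goal a
    smoothness loss like the paper's pursues."""
    dh = patch[1:, :] - patch[:-1, :]   # vertical neighbor differences
    dw = patch[:, 1:] - patch[:, :-1]   # horizontal neighbor differences
    return float(np.sum(dh ** 2) + np.sum(dw ** 2))

flat = np.ones((4, 4))                       # perfectly smooth patch
noisy = np.array([[0.0, 1.0], [1.0, 0.0]])   # checkerboard, maximally rough
print(total_variation(flat))    # 0.0
print(total_variation(noisy))   # 4.0
```

In attack generation, such a term is typically added to the adversarial objective with a weight; the paper's contribution, per the abstract, is to introduce the smoothness constraint in a delayed fashion rather than from the start of optimization.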
