Paper Title

On the Adversarial Transferability of ConvMixer Models

Paper Authors

Ryota Iijima, Miki Tanaka, Isao Echizen, Hitoshi Kiya

Paper Abstract

Deep neural networks (DNNs) are well known to be vulnerable to adversarial examples (AEs). In addition, AEs have adversarial transferability, which means that AEs generated for a source model can fool another black-box model (the target model) with a non-trivial probability. In this paper, we investigate, for the first time, the property of adversarial transferability between models that include ConvMixer, which is an isotropic network. To verify this property objectively, the robustness of the models is evaluated with a benchmark attack method called AutoAttack. In an image classification experiment, ConvMixer is confirmed to be vulnerable to transferred adversarial examples.
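
The evaluation described in the abstract amounts to crafting AEs against a white-box source model and measuring how often they still fool a different black-box target model. Below is a minimal sketch of that procedure, assuming PyTorch and the public `autoattack` package (https://github.com/fra31/auto-attack); `source_model`, `target_model`, and the data tensors are placeholders, not the paper's actual experimental setup.

```python
# Sketch of a transferability measurement: AEs are generated for the source
# model with AutoAttack, then evaluated on a separate target model.
# Model and data objects here are hypothetical placeholders.
import torch
from autoattack import AutoAttack


def transfer_attack_accuracy(source_model, target_model, x, y,
                             eps=8 / 255, batch_size=128, device="cuda"):
    """Craft AEs on `source_model` and report the target model's accuracy on
    them; lower accuracy indicates higher adversarial transferability."""
    source_model = source_model.to(device).eval()
    target_model = target_model.to(device).eval()
    x, y = x.to(device), y.to(device)

    # White-box AutoAttack against the source model (L_inf threat model).
    adversary = AutoAttack(source_model, norm="Linf", eps=eps, version="standard")
    x_adv = adversary.run_standard_evaluation(x, y, bs=batch_size)

    # Evaluate the transferred AEs on the black-box target model.
    with torch.no_grad():
        preds = target_model(x_adv).argmax(dim=1)
    return (preds == y).float().mean().item()
```

As a usage note, running this with a ConvMixer as the target model and a CNN or ViT as the source model (and vice versa) is one way to reproduce the kind of cross-architecture comparison the paper reports.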
