Paper Title

Adversarial Attacks on Monocular Depth Estimation

Authors

Ziqi Zhang, Xinge Zhu, Yingwei Li, Xiangqun Chen, Yao Guo

Abstract

Recent advances in deep learning have brought exceptional performance on many computer vision tasks such as semantic segmentation and depth estimation. However, the vulnerability of deep neural networks to adversarial examples has caused grave concerns for real-world deployment. In this paper, we present, to the best of our knowledge, the first systematic study of adversarial attacks on monocular depth estimation, an important task in 3D scene understanding for scenarios such as autonomous driving and robot navigation. To understand the impact of adversarial attacks on depth estimation, we first define a taxonomy of attack scenarios for depth estimation, including non-targeted attacks, targeted attacks, and universal attacks. We then adapt several state-of-the-art attack methods designed for classification to the field of depth estimation. In addition, multi-task attacks are introduced to further improve the attack performance of universal attacks. Experimental results show that it is possible to induce significant errors in depth estimation. In particular, we demonstrate that our methods can conduct targeted attacks on given objects (such as a car), pushing the depth estimate 3-4x away from the ground truth (e.g., from 20m to 80m).
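The non-targeted attack the abstract describes perturbs the input image so as to maximize the depth-estimation error, typically with a signed-gradient (FGSM/PGD-style) step bounded by an L-infinity budget. Below is a minimal sketch using a hypothetical linear stand-in for a depth network; the paper attacks real CNNs, and the model, ground-truth values, and `eps` budget here are illustrative assumptions, not the authors' setup:

```python
import numpy as np

# Hypothetical toy "depth model": a fixed linear map from image pixels to a
# small depth map. This stand-in only illustrates the attack mechanics.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64))          # maps a 64-pixel image to 16 depth values
x = rng.uniform(0.0, 1.0, size=64)     # clean input image (flattened, in [0, 1])
d_true = rng.uniform(5.0, 30.0, size=16)  # assumed ground-truth depths in meters

def depth(x):
    return W @ x

def loss(x):
    # Mean squared depth error; the attacker wants to *maximize* this.
    return 0.5 * np.mean((depth(x) - d_true) ** 2)

def grad_loss(x):
    # Analytic gradient of the MSE loss w.r.t. the input pixels
    # (real attacks obtain this via backpropagation through the network).
    return W.T @ (depth(x) - d_true) / len(d_true)

# Non-targeted FGSM step: move each pixel by eps in the sign of the gradient,
# then clip back to the valid image range.
eps = 0.05
x_adv = np.clip(x + eps * np.sign(grad_loss(x)), 0.0, 1.0)
```

A targeted variant would instead *minimize* the error to an attacker-chosen depth map (e.g., one that pushes a car's region from 20m to 80m), stepping against the gradient rather than along it.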
