Title
Weighted Average Precision: Adversarial Example Detection in the Visual Perception of Autonomous Vehicles
Authors
Abstract
Recent works have shown that neural networks are vulnerable to carefully crafted adversarial examples (AEs). By adding small perturbations to input images, AEs can cause the victim model to predict incorrect outputs. Several research efforts in adversarial machine learning have started to focus on the detection of AEs in autonomous driving. However, existing studies either rely on preliminary assumptions about detection outputs or ignore the tracking system in the perception pipeline. In this paper, we first propose a novel distance metric for practical object detection outputs in autonomous driving. We then bridge the gap between current AE detection research and real-world autonomous systems by providing a temporal detection algorithm that takes the impact of the tracking system into consideration. We evaluate our approach on the Berkeley Deep Drive (BDD) and CityScapes datasets, showing that it outperforms existing single-frame-mAP-based AE detection by improving accuracy by 17.76%.
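To make the core threat concrete: an adversarial example perturbs the input just enough to flip the model's prediction while staying visually negligible. The sketch below is a generic FGSM-style illustration against a toy linear classifier; the model, dimensions, and step size are assumptions for this sketch only and are not the paper's detection method or metric.

```python
import numpy as np

# Hedged illustration only: a generic FGSM-style adversarial example against
# a toy linear classifier (score(x) = w @ x, class = sign of the score).
# All quantities here are made up for the sketch; this is NOT the paper's method.

rng = np.random.default_rng(0)
w = rng.normal(size=16)          # weights of the toy linear model
x = rng.normal(size=16)          # a "clean" input
if w @ x < 0:                    # ensure the clean prediction is the +1 class
    x = -x

clean_score = w @ x

# FGSM-style step: perturb each coordinate against the sign of the score's
# gradient w.r.t. the input (for a linear model that gradient is simply w).
eps = 1.05 * clean_score / np.abs(w).sum()   # smallest step that flips the sign
delta = -eps * np.sign(w)
x_adv = x + delta

adv_score = w @ x_adv
print(f"clean score: {clean_score:.3f}, adversarial score: {adv_score:.3f}")
print(f"max perturbation per coordinate: {np.abs(delta).max():.3f}")
```

The per-coordinate perturbation magnitude `eps` is small relative to the input's scale, yet the predicted class flips, which is exactly the behavior the abstract's detection problem targets.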