Paper Title

Improved Noise and Attack Robustness for Semantic Segmentation by Using Multi-Task Training with Self-Supervised Depth Estimation

Paper Authors

Marvin Klingner, Andreas Bär, Tim Fingscheidt

Paper Abstract

While current approaches for neural network training often aim at improving performance, less focus is put on training methods aiming at robustness towards varying noise conditions or directed attacks by adversarial examples. In this paper, we propose to improve robustness by a multi-task training, which extends supervised semantic segmentation by a self-supervised monocular depth estimation on unlabeled videos. This additional task is only performed during training to improve the semantic segmentation model's robustness at test time under several input perturbations. Moreover, we even find that our joint training approach also improves the performance of the model on the original (supervised) semantic segmentation task. Our evaluation exhibits a particular novelty in that it allows us to mutually compare the effect of input noise and adversarial attacks on the robustness of the semantic segmentation. We show the effectiveness of our method on the Cityscapes dataset, where our multi-task training approach consistently outperforms the single-task semantic segmentation baseline in terms of robustness to both noise and adversarial attacks, without the need for depth labels during training.
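
To make the training scheme described in the abstract concrete, here is a minimal PyTorch-style sketch, not the authors' implementation: a shared encoder feeds a supervised segmentation head and a self-supervised depth head, and the depth head contributes only a training-time auxiliary loss. The toy architecture, the placeholder photometric_loss (with the image warping step omitted), and the loss weight lambda_depth are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    """Toy shared encoder with a supervised segmentation head and a
    self-supervised depth (disparity) head used only during training."""
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)
        self.depth_head = nn.Sequential(nn.Conv2d(64, 1, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.encoder(x)
        return self.seg_head(feat), self.depth_head(feat)

def photometric_loss(disparity, target_frame, source_frame):
    # Stand-in for the self-supervised depth objective: in the real method the
    # source frame is warped into the target view using the predicted depth and
    # camera motion, and a photometric reconstruction error is minimized.
    # The zero-weighted disparity term only keeps the depth head in the graph
    # of this stub; the warping itself is omitted for brevity.
    return (source_frame - target_frame).abs().mean() + 0.0 * disparity.mean()

model = MultiTaskNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
lambda_depth = 0.1  # assumed weighting of the auxiliary depth task

# One illustrative joint training step on dummy data.
images = torch.rand(2, 3, 128, 256)           # labeled frames
labels = torch.randint(0, 19, (2, 128, 256))  # segmentation ground truth
prev_frames = torch.rand(2, 3, 128, 256)      # preceding unlabeled video frames

optimizer.zero_grad()
seg_logits, disparity = model(images)
loss_seg = F.cross_entropy(seg_logits, labels)                 # supervised task
loss_depth = photometric_loss(disparity, images, prev_frames)  # self-supervised task
loss = loss_seg + lambda_depth * loss_depth                    # joint objective
loss.backward()
optimizer.step()

At test time only the segmentation head would be used, which matches the abstract's statement that the depth task is performed only during training to improve robustness under input perturbations.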
