Paper Title
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
Paper Authors
Paper Abstract
Today's proliferation of powerful facial recognition systems poses a real threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the Internet for data and train highly accurate facial recognition models of individuals without their knowledge. We need tools to protect ourselves from potential misuses of unauthorized facial recognition systems. Unfortunately, no practical or effective solutions exist. In this paper, we propose Fawkes, a system that helps individuals inoculate their images against unauthorized facial recognition models. Fawkes achieves this by helping users add imperceptible pixel-level changes (we call them "cloaks") to their own photos before releasing them. When used to train facial recognition models, these "cloaked" images produce functional models that consistently cause normal images of the user to be misidentified. We experimentally demonstrate that Fawkes provides 95+% protection against user recognition regardless of how trackers train their models. Even when clean, uncloaked images are "leaked" to the tracker and used for training, Fawkes can still maintain an 80+% protection success rate. We achieve 100% success in experiments against today's state-of-the-art facial recognition services. Finally, we show that Fawkes is robust against a variety of countermeasures that try to detect or disrupt image cloaks.
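To make the cloaking idea in the abstract concrete, the sketch below shows one way such imperceptible, feature-space perturbations could be optimized. This is a minimal illustration, not the authors' implementation: the names `compute_cloak`, `feature_extractor`, `target_image`, `epsilon`, `steps`, and `lr` are all assumptions introduced here, the bound on the perturbation is a simple L-infinity clamp rather than the perceptual budget used in the paper, and the feature extractor stands in for whatever model a tracker might train.

```python
# Hedged sketch (assumptions noted in the lead-in): optimize a small perturbation
# ("cloak") so the cloaked photo's features resemble a different identity, while
# keeping the pixel-level change bounded and thus hard to notice.
import torch
import torch.nn.functional as F


def compute_cloak(original_image: torch.Tensor,
                  target_image: torch.Tensor,
                  feature_extractor: torch.nn.Module,
                  epsilon: float = 0.03,
                  steps: int = 200,
                  lr: float = 0.01) -> torch.Tensor:
    """Return a cloaked copy of `original_image` (values assumed in [0, 1])."""
    feature_extractor.eval()
    with torch.no_grad():
        # Feature vector of a photo from a *different* identity.
        target_features = feature_extractor(target_image)

    # The cloak is the only quantity being optimized.
    delta = torch.zeros_like(original_image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (original_image + delta).clamp(0.0, 1.0)
        features = feature_extractor(cloaked)
        # Pull the cloaked image's features toward the target identity.
        loss = F.mse_loss(features, target_features)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Keep the change small so the photo looks unchanged to a human.
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)

    return (original_image + delta).detach().clamp(0.0, 1.0)
```

The intent illustrated here is the one stated in the abstract: a model trained on such cloaked photos associates the user's identity with the wrong region of feature space, so clean photos of the user are later misidentified.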