Paper Title


Two-branch Recurrent Network for Isolating Deepfakes in Videos

Authors

Iacopo Masi, Aditya Killekar, Royston Marian Mascarenhas, Shenoy Pratik Gurudatt, Wael AbdAlmageed

Abstract


The current spike of hyper-realistic faces artificially generated using deepfakes calls for media forensics solutions that are tailored to video streams and work reliably with a low false alarm rate at the video level. We present a method for deepfake detection based on a two-branch network structure that isolates digitally manipulated faces by learning to amplify artifacts while suppressing the high-level face content. Unlike current methods that extract spatial frequencies as a preprocessing step, we propose a two-branch structure: one branch propagates the original information, while the other branch suppresses the face content yet amplifies multi-band frequencies using a Laplacian of Gaussian (LoG) as a bottleneck layer. To better isolate manipulated faces, we derive a novel cost function that, unlike regular classification, compresses the variability of natural faces and pushes away the unrealistic facial samples in the feature space. Our two novel components show promising results on the FaceForensics++, Celeb-DF, and Facebook's DFDC preview benchmarks, when compared to prior work. We then offer a full, detailed ablation study of our network architecture and cost function. Finally, although the bar is still high to get very remarkable figures at a very low false alarm rate, our study shows that we can achieve good video-level performance when cross-testing in terms of video-level AUC.
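The abstract's second branch suppresses face content while amplifying multi-band frequencies with a Laplacian of Gaussian (LoG) bottleneck. The sketch below illustrates that idea only: a multi-scale LoG filter bank applied via FFT convolution, where the zero-mean band-pass kernels null out smooth (low-frequency) face content and keep fine detail. The kernel sizes, scales, and FFT-based circular convolution are illustrative choices, not the paper's actual layer.

```python
import numpy as np

def log_kernel(sigma, size=None):
    """Laplacian of Gaussian kernel: a band-pass filter that suppresses
    smooth (low-frequency) content while responding to edge-like detail."""
    if size is None:
        size = int(2 * np.ceil(3 * sigma)) + 1  # cover ~3 sigma each side
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (-(1.0 / (np.pi * sigma ** 4))
         * (1 - r2 / (2 * sigma ** 2))
         * np.exp(-r2 / (2 * sigma ** 2)))
    return k - k.mean()  # zero-mean: constant regions map to exactly 0

def log_bottleneck(img, sigmas=(1.0, 2.0, 4.0)):
    """Apply LoG at several scales (multi-band) and stack the responses
    as channels -- a stand-in for a LoG bottleneck over one gray image."""
    h, w = img.shape
    responses = []
    for s in sigmas:
        k = log_kernel(s)
        # circular convolution in the frequency domain (illustrative)
        K = np.fft.rfft2(k, s=(h, w))
        I = np.fft.rfft2(img, s=(h, w))
        responses.append(np.fft.irfft2(I * K, s=(h, w)))
    return np.stack(responses, axis=0)
```

On a perfectly flat patch every band responds with zeros, which is the sense in which the filter "suppresses face content": only texture and edge statistics, where manipulation artifacts tend to live, pass through.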
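The abstract also describes a cost function that, unlike plain classification, compresses the variability of natural faces and pushes unrealistic samples away in feature space. One plausible reading of that idea is a center-based hinge loss: real embeddings are pulled toward a center, fake embeddings are pushed beyond a margin. The `center` and `margin` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def isolation_loss(feats, labels, center, margin=5.0):
    """Illustrative center-plus-margin loss: compress real faces
    (label 0) around `center`, push fakes (label 1) at least `margin`
    away. Not the paper's exact cost function."""
    d = np.linalg.norm(feats - center, axis=1)  # distance to the center
    real = labels == 0
    loss_real = float(np.mean(d[real] ** 2)) if real.any() else 0.0
    # hinge: only penalize fakes that sit inside the margin
    pushed = np.maximum(0.0, margin - d[~real])
    loss_fake = float(np.mean(pushed ** 2)) if (~real).any() else 0.0
    return loss_real + loss_fake
```

For example, a real sample sitting exactly on the center and a fake sample beyond the margin incur zero loss, while a fake at the center pays the full squared margin.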
