Paper Title
Explainable Deep One-Class Classification
Paper Authors
Abstract
Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space, causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper, we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training, and even a few of these (~5) significantly improve performance. Finally, using FCDD's explanations, we demonstrate the vulnerability of deep one-class classification models to spurious image features such as image watermarks.
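The core idea stated in the abstract, that the mapped samples are themselves the explanation heatmap, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names and the toy 4x4 feature map are hypothetical, and we assume only that an anomaly score is obtained by aggregating a robust (pseudo-Huber) transform of a fully convolutional network's spatial output, so the same map that produces the score doubles as the heatmap.

```python
import numpy as np

def pseudo_huber(a):
    # Element-wise pseudo-Huber transform: sqrt(a^2 + 1) - 1.
    # Behaves quadratically near 0 and linearly for large |a|.
    return np.sqrt(a ** 2 + 1.0) - 1.0

def fcdd_score_and_heatmap(feature_map):
    """Given the spatial output A(x) of a fully convolutional network
    (hypothetical input here), return a scalar anomaly score (the mean
    of the pseudo-Huber map) and the spatial map itself, which serves
    directly as the explanation heatmap."""
    heatmap = pseudo_huber(feature_map)
    score = heatmap.mean()
    return score, heatmap

# Toy example: a 4x4 "feature map" with one strongly responding cell,
# standing in for an anomalous image region.
A = np.zeros((4, 4))
A[1, 1] = 3.0
score, heatmap = fcdd_score_and_heatmap(A)
```

In the full method the low-resolution heatmap would be upsampled to the input image size for visualization; here the point is only that no separate attribution step is needed, since the score is computed from the heatmap itself.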