Paper Title
Learning Visual Explanations for DCNN-Based Image Classifiers Using an Attention Mechanism
Paper Authors
Paper Abstract
In this paper, two new learning-based eXplainable AI (XAI) methods for deep convolutional neural network (DCNN) image classifiers, called L-CAM-Fm and L-CAM-Img, are proposed. Both methods use an attention mechanism that is inserted into the original (frozen) DCNN and trained to derive class activation maps (CAMs) from the last convolutional layer's feature maps. During training, the CAMs are applied to the feature maps (L-CAM-Fm) or to the input image (L-CAM-Img), forcing the attention mechanism to learn the image regions that explain the DCNN's outcome. Experimental evaluation on ImageNet shows that the proposed methods achieve competitive results while requiring only a single forward pass at the inference stage. Moreover, based on the derived explanations, a comprehensive qualitative analysis is performed, providing valuable insight into the reasons behind classification errors, including possible dataset biases affecting the trained classifier.
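The abstract's core idea can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the attention mechanism is assumed here to be a 1x1 convolution over the last conv layer's feature maps (one weight vector per class), and all shapes, names, and the sigmoid normalisation are illustrative assumptions; the actual L-CAM architecture and training loss are defined in the paper itself.

```python
import numpy as np

# Illustrative shapes (assumptions, not the paper's): K feature-map channels
# of spatial size H x W from the frozen DCNN's last convolutional layer.
K, H, W = 8, 7, 7
num_classes = 5

rng = np.random.default_rng(0)
F = rng.standard_normal((K, H, W))  # stand-in for the DCNN's feature maps

# Attention mechanism sketched as a learnable 1x1 convolution:
# one K-dimensional weight vector (plus bias) per class -> one CAM per class.
W_att = rng.standard_normal((num_classes, K))
b_att = np.zeros(num_classes)

def class_activation_maps(F, W_att, b_att):
    """Derive per-class CAMs as a 1x1 conv over the feature maps."""
    # tensordot contracts the channel axis: (C, K) x (K, H, W) -> (C, H, W)
    return np.tensordot(W_att, F, axes=([1], [0])) + b_att[:, None, None]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

cams = class_activation_maps(F, W_att, b_att)

# L-CAM-Fm flavour of the training signal: the (sigmoid-normalised) CAM of
# the target class re-weights the feature maps before the classifier's head,
# so informative regions must survive the masking for the class score to stay high.
target_class = 2
A = sigmoid(cams[target_class])        # attention map with values in (0, 1)
F_weighted = A[None, :, :] * F          # broadcast the map over all K channels
```

For L-CAM-Img, the same normalised CAM would instead be upsampled to the input resolution and multiplied element-wise with the input image before a second pass through the frozen classifier; at inference, only the single forward pass producing the CAM is needed.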