Paper Title
Learning and Evaluating Graph Neural Network Explanations based on Counterfactual and Factual Reasoning
Paper Authors
Paper Abstract
Structural data is ubiquitous in Web applications, such as social networks in social media, citation networks on academic websites, and thread data in online forums. Due to its complex topology, such data is difficult to process, and the rich information within it is hard to exploit. Graph Neural Networks (GNNs) have shown great advantages in learning representations for structural data. However, the opacity of deep learning models makes it non-trivial to explain and interpret the predictions made by GNNs. Meanwhile, evaluating GNN explanations is also a significant challenge, since in many cases ground-truth explanations are unavailable. In this paper, we take insights from Counterfactual and Factual (CF^2) reasoning in causal inference theory to address both the learning and evaluation problems in explainable GNNs. To generate explanations, we propose a model-agnostic framework that formulates an optimization problem based on both causal perspectives. This distinguishes CF^2 from previous explainable GNNs that consider only one of them. Another contribution of this work is the evaluation of GNN explanations. To quantitatively evaluate generated explanations without requiring ground truth, we design metrics based on counterfactual and factual reasoning to assess the necessity and sufficiency of the explanations. Experiments show that, whether or not ground-truth explanations are available, CF^2 generates better explanations than previous state-of-the-art methods on real-world datasets. Moreover, statistical analysis justifies the correlation between performance on ground-truth evaluation and our proposed metrics. Source code is available at https://github.com/chrisjtan/gnn_cff.
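The two causal perspectives in the abstract can be made concrete with a minimal sketch (this is an illustration of the general idea, not the paper's implementation; the function names and the toy motif-detecting model are hypothetical): factual reasoning asks whether the explanation subgraph alone is *sufficient* to reproduce the model's prediction, while counterfactual reasoning asks whether the explanation is *necessary*, i.e. whether removing it flips the prediction.

```python
# Sketch of factual (sufficiency) and counterfactual (necessity) checks
# for a GNN explanation, with the graph represented as a set of edges.
# "model" is any function mapping an edge set to a predicted label.

def is_sufficient(model, explanation, original_pred):
    """Factual check: the explanation edges alone should reproduce
    the model's original prediction."""
    return model(explanation) == original_pred

def is_necessary(model, edges, explanation, original_pred):
    """Counterfactual check: deleting the explanation edges from the
    full graph should change the model's original prediction."""
    return model(edges - explanation) != original_pred

# Toy model (hypothetical): predicts 1 iff the "motif" edge (0, 1) is present.
toy_model = lambda es: int((0, 1) in es)

graph = {(0, 1), (1, 2), (2, 3)}
explanation = {(0, 1)}               # candidate explanation subgraph
pred = toy_model(graph)              # original prediction: 1

print(is_sufficient(toy_model, explanation, pred))         # True
print(is_necessary(toy_model, graph, explanation, pred))   # True
```

A good explanation under this view passes both checks; an explanation that passes only the factual check may contain redundant edges, while one that passes only the counterfactual check may be incomplete.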