Paper Title
Exploring Faithful Rationale for Multi-hop Fact Verification via Salience-Aware Graph Learning
Paper Authors
Paper Abstract
The opaqueness of multi-hop fact verification models imposes an imperative requirement for explainability. One feasible approach is to extract rationales, a subset of the inputs whose removal causes the prediction performance to drop dramatically. Though explainable, most rationale extraction methods for multi-hop fact verification explore the semantic information within each piece of evidence individually, while ignoring the topological information interaction among different pieces of evidence. Intuitively, a faithful rationale bears complementary information that enables the extraction of other rationales through the multi-hop reasoning process. To tackle these disadvantages, we cast explainable multi-hop fact verification as subgraph extraction, which can be solved with salience-aware graph learning based on a graph convolutional network (GCN). Specifically, the GCN is utilized to incorporate the topological interaction information among multiple pieces of evidence for learning evidence representations. Meanwhile, to alleviate the influence of noisy evidence, salience-aware graph perturbation is introduced into the message passing of the GCN. Moreover, a multi-task model with three diagnostic properties of rationales is carefully designed to improve explanation quality without any explicit rationale annotations. Experimental results on the FEVEROUS benchmark show significant gains over previous state-of-the-art methods for both rationale extraction and fact verification.
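The abstract's central mechanism, salience-aware message passing over an evidence graph, can be illustrated with a minimal sketch. The snippet below is not the authors' released code: the class name `SalienceAwareGCNLayer`, the sigmoid salience scorer, and the fully connected evidence graph are illustrative assumptions. It shows one plausible way to down-weight noisy evidence during GCN aggregation, in the spirit of the graph perturbation the abstract describes.

```python
# Minimal sketch (assumptions noted above) of salience-aware GCN message
# passing over an evidence graph. Nodes carry evidence embeddings; each
# edge is scaled by the sending node's predicted salience, so noisy
# evidence contributes less to the aggregated representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SalienceAwareGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.salience = nn.Linear(in_dim, 1)  # hypothetical salience scorer

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (num_nodes, in_dim) evidence representations
        # adj: (num_nodes, num_nodes) adjacency matrix with self-loops
        s = torch.sigmoid(self.salience(x)).squeeze(-1)  # (num_nodes,)
        # Perturb the graph: scale every edge by its sender's salience.
        weighted_adj = adj * s.unsqueeze(0)
        # Row-normalize so each node aggregates a convex combination
        # of its neighbors' features.
        deg = weighted_adj.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        norm_adj = weighted_adj / deg
        return F.relu(self.linear(norm_adj @ x))

# Usage: five pieces of evidence on a fully connected evidence graph.
layer = SalienceAwareGCNLayer(in_dim=768, out_dim=256)
x = torch.randn(5, 768)
adj = torch.ones(5, 5)
h = layer(x, adj)  # (5, 256) salience-weighted evidence representations
```

In this sketch the salience scores gate the adjacency matrix rather than the node features, so a low-salience piece of evidence still receives messages but sends weaker ones; the paper's actual perturbation scheme and multi-task training objectives are not reproduced here.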