Paper Title
Autoregressive Reasoning over Chains of Facts with Transformers
Paper Authors
Paper Abstract
This paper proposes an iterative inference algorithm for multi-hop explanation regeneration: given a natural language question and its answer, it retrieves the relevant factual evidence, in the form of text snippets, from a corpus. Combining multiple pieces of evidence for multi-hop reasoning becomes increasingly difficult as the number of facts needed to make an inference grows. Our algorithm copes with this by decomposing fact selection autoregressively, conditioning each iteration on the facts selected so far. This decomposition allows us to train with a pairwise learning-to-rank loss. We validate our method on the datasets of the TextGraphs 2019 and 2020 Shared Tasks for explanation regeneration. Existing work on this task either evaluates facts in isolation or artificially limits the possible chains of facts, thereby restricting multi-hop inference. We demonstrate that, when used with a pre-trained transformer model, our algorithm outperforms the previous state of the art in precision, training time, and inference efficiency.
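To make the two central ideas of the abstract concrete, here is a minimal sketch, not the authors' released implementation, of (a) the autoregressive fact-selection loop and (b) a pairwise learning-to-rank objective. The names `score_fn`, `select_explanation_chain`, `pairwise_rank_loss`, and the `max_hops` cap are illustrative assumptions; in the paper, the scorer would be a pre-trained transformer rating a candidate fact against the question/answer text concatenated with the facts selected so far.

```python
# Sketch only: assumes a callable relevance scorer stands in for the
# pre-trained transformer described in the abstract.
from typing import Callable, List

import torch
import torch.nn.functional as F


def select_explanation_chain(
    query: str,                              # question text + answer text
    corpus: List[str],                       # candidate facts (text snippets)
    score_fn: Callable[[str, str], float],   # relevance of (context, fact)
    max_hops: int = 5,                       # assumed cap on chain length
) -> List[str]:
    """Greedily build a chain of facts, one hop at a time.

    Each iteration scores the remaining candidates conditioned on the
    query AND the previously selected facts, which is what makes the
    decomposition autoregressive.
    """
    chain: List[str] = []
    remaining = list(corpus)
    for _ in range(max_hops):
        if not remaining:
            break
        context = " ".join([query] + chain)  # condition on the partial chain
        best = max(remaining, key=lambda fact: score_fn(context, fact))
        chain.append(best)
        remaining.remove(best)
    return chain


def pairwise_rank_loss(pos_scores: torch.Tensor,
                       neg_scores: torch.Tensor,
                       margin: float = 1.0) -> torch.Tensor:
    """Pairwise learning-to-rank objective: a gold fact for the current
    hop should outscore a sampled negative fact by at least `margin`."""
    target = torch.ones_like(pos_scores)  # +1: pos should rank above neg
    return F.margin_ranking_loss(pos_scores, neg_scores, target,
                                 margin=margin)
```

Because each hop is a simple ranking decision conditioned on the partial chain, training reduces to comparing scores of positive and negative facts per hop, which is what enables the pairwise loss mentioned in the abstract rather than a loss over whole chains.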