Paper Title

Re2G: Retrieve, Rerank, Generate

Paper Authors

Michael Glass, Gaetano Rossiello, Md Faisal Mahbub Chowdhury, Ankita Rajaram Naik, Pengshan Cai, Alfio Gliozzo

Paper Abstract

As demonstrated by GPT-3 and T5, transformers grow in capability as parameter spaces become larger and larger. However, for tasks that require a large amount of knowledge, non-parametric memory allows models to grow dramatically with a sub-linear increase in computational cost and GPU memory requirements. Recent models such as RAG and REALM have introduced retrieval into conditional generation. These models incorporate neural initial retrieval from a corpus of passages. We build on this line of research, proposing Re2G, which combines both neural initial retrieval and reranking into a BART-based sequence-to-sequence generation. Our reranking approach also permits merging retrieval results from sources with incomparable scores, enabling an ensemble of BM25 and neural initial retrieval. To train our system end-to-end, we introduce a novel variation of knowledge distillation to train the initial retrieval, reranker, and generation using only ground truth on the target sequence output. We find large gains in four diverse tasks: zero-shot slot filling, question answering, fact-checking, and dialog, with relative gains of 9% to 34% over the previous state-of-the-art on the KILT leaderboard. We make our code available as open source at https://github.com/IBM/kgi-slot-filling/tree/re2g.
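
To make the three-stage architecture concrete, here is a minimal sketch of a retrieve, rerank, generate pipeline in the spirit of Re2G. This is not the authors' implementation: the checkpoints (`cross-encoder/ms-marco-MiniLM-L-6-v2`, `facebook/bart-large`) and the toy corpus are illustrative stand-ins, and BM25 alone stands in for the BM25 + neural retrieval ensemble described in the abstract.

```python
# Sketch of retrieve -> rerank -> generate. Model names and corpus are
# stand-ins, not the checkpoints used in the Re2G paper.
import torch
from rank_bm25 import BM25Okapi
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    BartForConditionalGeneration,
    BartTokenizer,
)

corpus = [
    "Re2G combines initial retrieval and reranking with BART generation.",
    "BM25 is a sparse lexical retrieval function based on term statistics.",
    "Knowledge distillation transfers scores from a teacher to a student.",
]
query = "What does Re2G combine?"

# 1) Initial retrieval: BM25 stands in for the BM25 + neural ensemble.
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])
bm25_scores = bm25.get_scores(query.lower().split())
candidates = sorted(range(len(corpus)), key=lambda i: -bm25_scores[i])[:2]

# 2) Rerank with a cross-encoder that scores (query, passage) pairs jointly.
#    Its scores are mutually comparable, unlike raw BM25 vs. dense scores,
#    which is what lets the reranker merge results from both sources.
rr_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"  # illustrative checkpoint
rr_tok = AutoTokenizer.from_pretrained(rr_name)
rr_model = AutoModelForSequenceClassification.from_pretrained(rr_name)
pairs = rr_tok(
    [query] * len(candidates),
    [corpus[i] for i in candidates],
    padding=True, truncation=True, return_tensors="pt",
)
with torch.no_grad():
    rr_scores = rr_model(**pairs).logits.squeeze(-1)
best = candidates[int(rr_scores.argmax())]

# 3) Condition a BART sequence-to-sequence model on query plus top passage.
gen_name = "facebook/bart-large"
gen_tok = BartTokenizer.from_pretrained(gen_name)
gen_model = BartForConditionalGeneration.from_pretrained(gen_name)
inputs = gen_tok(f"{query} </s> {corpus[best]}", return_tensors="pt")
output_ids = gen_model.generate(**inputs, max_new_tokens=32, num_beams=4)
print(gen_tok.decode(output_ids[0], skip_special_tokens=True))
```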
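
The abstract also mentions training the initial retriever, reranker, and generator end-to-end with only target-sequence ground truth, via a variation of knowledge distillation. One plausible form of such an objective, sketched below under the assumption that the reranker acts as teacher for the retriever, is a KL divergence between the two score distributions over the retrieved passages. The helper `retrieval_distillation_loss` is hypothetical, not code from the paper's repository, and the paper's exact objective may differ in its details.

```python
# Hedged sketch: distill reranker scores into retriever scores by matching
# the retriever's softmax distribution over top-k passages to the
# reranker's. Hypothetical helper; not the paper's exact objective.
import torch
import torch.nn.functional as F

def retrieval_distillation_loss(retriever_scores: torch.Tensor,
                                reranker_scores: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student) over the top-k retrieved passages.

    retriever_scores: (batch, k) scores from the student retriever.
    reranker_scores:  (batch, k) cross-encoder scores from the teacher.
    """
    student_log_probs = F.log_softmax(retriever_scores, dim=-1)
    # Detach the teacher so gradients flow only into the retriever.
    teacher_probs = F.softmax(reranker_scores.detach(), dim=-1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")

# Usage: gradients reach the retriever scores only.
retriever = torch.randn(2, 5, requires_grad=True)
reranker = torch.randn(2, 5)
loss = retrieval_distillation_loss(retriever, reranker)
loss.backward()
print(float(loss))
```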
