Paper Title
Unsupervised Paraphrasing with Pretrained Language Models
Paper Authors
Paper Abstract
Paraphrase generation has benefited extensively from recent progress in the design of training objectives and model architectures. However, previous explorations have largely focused on supervised methods, which require a large amount of labeled data that is costly to collect. To address this drawback, we adopt a transfer learning approach and propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting. Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking (DB). To enforce a surface form dissimilar from the input, whenever the language model emits a token contained in the source sequence, DB prevents the model from outputting the subsequent source token at the next generation step. We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pairs (QQP) and the ParaNMT datasets and is robust to domain shift between the two datasets of distinct distributions. We also demonstrate that our model transfers to paraphrasing in other languages without any additional finetuning.
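Below is a minimal, illustrative sketch (in Python, using hypothetical integer token ids) of the Dynamic Blocking rule described in the abstract: if the most recently generated token also appears in the source sequence, the token that immediately follows it in the source is blocked at the next decoding step, discouraging verbatim copying. This is not the authors' exact implementation, and the full decoding algorithm in the paper involves machinery beyond this single rule.

from typing import List, Set


def dynamic_blocking_mask(source_ids: List[int], generated_ids: List[int]) -> Set[int]:
    """Return the set of token ids to block at the next generation step."""
    if not generated_ids:
        return set()
    last_token = generated_ids[-1]
    blocked: Set[int] = set()
    # Every time the last emitted token occurs in the source, block the token
    # that immediately follows it in the source sequence.
    for i, tok in enumerate(source_ids[:-1]):
        if tok == last_token:
            blocked.add(source_ids[i + 1])
    return blocked


def apply_blocking(logits: List[float], blocked: Set[int]) -> List[float]:
    """Set blocked token logits to -inf so those tokens cannot be selected."""
    return [float("-inf") if i in blocked else score for i, score in enumerate(logits)]


# Hypothetical usage with toy token ids:
source = [12, 45, 7, 99]      # source sequence
generated_so_far = [30, 45]   # the model just emitted token 45, which occurs in the source
print(dynamic_blocking_mask(source, generated_so_far))  # {7}: the source token after 45 is blocked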