Paper Title
Towards Making the Most of Context in Neural Machine Translation
Paper Authors
Paper Abstract
Document-level machine translation manages to outperform sentence-level models by a small margin, but has failed to be widely adopted. We argue that previous research did not make clear use of the global context, and propose a new document-level NMT framework that deliberately models the local context of each sentence with awareness of the global context of the document in both source and target languages. We specifically design the model to handle documents containing any number of sentences, including single sentences. This unified approach allows our model to be trained elegantly on standard datasets without needing to train on sentence-level and document-level data separately. Experimental results demonstrate that our model outperforms Transformer baselines and previous document-level NMT models by substantial margins of up to 2.1 BLEU over state-of-the-art baselines. We also provide analyses showing that the benefit of context extends far beyond the neighboring two or three sentences that previous studies have typically incorporated.
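The abstract does not spell out the architecture, but the core idea of "modeling the local context of each sentence with awareness of the global context of the document" can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the class name GlobalAwareSentenceEncoder, the mean-pooled sentence summaries, and the gated fusion are hypothetical choices, not the paper's published design.

```python
# Illustrative sketch only; the abstract gives no architectural details.
# Assumed design: encode each sentence locally with a Transformer, pool a
# per-sentence summary to form a document-wide "global" memory, let every
# token attend to that memory, and fuse local and global context with a
# learned gate.
import torch
import torch.nn as nn


class GlobalAwareSentenceEncoder(nn.Module):  # hypothetical name
    def __init__(self, d_model=512, nhead=8, num_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.local_encoder = nn.TransformerEncoder(layer, num_layers)
        # Cross-attention from each token to the document's sentence summaries.
        self.global_attn = nn.MultiheadAttention(d_model, nhead, batch_first=True)
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, doc):
        # doc: (num_sents, sent_len, d_model) -- embedded sentences of one document,
        # treated as a batch so the local encoder never crosses sentence boundaries.
        local = self.local_encoder(doc)
        summaries = local.mean(dim=1)                               # (num_sents, d_model)
        glob = summaries.unsqueeze(0).expand(doc.size(0), -1, -1)   # shared global memory
        ctx, _ = self.global_attn(local, glob, glob)                # tokens attend to all sentences
        g = torch.sigmoid(self.gate(torch.cat([local, ctx], dim=-1)))
        return g * local + (1 - g) * ctx                            # gated local/global fusion


enc = GlobalAwareSentenceEncoder()
out = enc(torch.randn(3, 20, 512))  # a 3-sentence document, 20 tokens per sentence
print(out.shape)                    # torch.Size([3, 20, 512])
```

A single-sentence input (num_sents == 1) degrades gracefully in this sketch, since the global memory then contains only that sentence's own summary; the gate can also fall back to purely local information. That is one plausible way a single model could serve both sentence- and document-level translation, as the abstract claims.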