Paper Title

Learning to Generalize to More: Continuous Semantic Augmentation for Neural Machine Translation

Authors

Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Weihua Luo, Jun Xie, Rong Jin

Abstract

The principal task in supervised neural machine translation (NMT) is to learn to generate target sentences conditioned on the source inputs from a set of parallel sentence pairs, and thus produce a model capable of generalizing to unseen instances. However, it is commonly observed that the generalization performance of the model is highly influenced by the amount of parallel data used in training. Although data augmentation is widely used to enrich the training data, conventional methods with discrete manipulations fail to generate diverse and faithful training samples. In this paper, we present a novel data augmentation paradigm termed Continuous Semantic Augmentation (CsaNMT), which augments each training instance with an adjacency semantic region that covers adequate variants of literal expression under the same meaning. We conduct extensive experiments on both rich-resource and low-resource settings involving various language pairs, including WMT14 English-{German,French}, NIST Chinese-English and multiple low-resource IWSLT translation tasks. The provided empirical evidence shows that CsaNMT sets a new level of performance among existing augmentation techniques, improving on the state-of-the-art by a large margin. The core code is contained in Appendix E.
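The abstract's key idea is to augment in a continuous semantic space rather than through discrete token edits: each training sentence is represented as an embedding, and nearby points in that space are treated as alternative phrasings of the same meaning. Below is a minimal PyTorch sketch of that idea, not the paper's actual sampling procedure; the function name `sample_adjacent_semantics`, the fixed radius, and the Gaussian direction sampling are illustrative assumptions.

```python
import torch

def sample_adjacent_semantics(embedding: torch.Tensor,
                              num_samples: int = 8,
                              radius: float = 0.1) -> torch.Tensor:
    """Sample vectors from the neighborhood of a sentence embedding.

    A simplified stand-in for CsaNMT's adjacency semantic region:
    each sampled vector is treated as an alternative "literal
    expression" of the same underlying meaning.
    """
    # Draw Gaussian directions and rescale each to the given radius,
    # so every sample lies at that distance from the original embedding.
    noise = torch.randn(num_samples, embedding.size(-1))
    noise = radius * noise / noise.norm(dim=-1, keepdim=True)
    return embedding.unsqueeze(0) + noise

# Hypothetical usage: augment one training pair with several
# semantically adjacent variants of its source-side embedding.
src_embedding = torch.randn(512)           # stand-in sentence vector
variants = sample_adjacent_semantics(src_embedding)
print(variants.shape)                      # torch.Size([8, 512])
```

In a full training pipeline, such sampled vectors would presumably condition the decoder as extra training signal alongside the original pair; this sketch stops at producing the neighborhood samples.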
