Paper Title

A Simple Multi-Modality Transfer Learning Baseline for Sign Language Translation

Authors

Yutong Chen, Fangyun Wei, Xiao Sun, Zhirong Wu, Stephen Lin

Abstract

This paper proposes a simple transfer learning baseline for sign language translation. Existing sign language datasets (e.g. PHOENIX-2014T, CSL-Daily) contain only about 10K-20K pairs of sign videos, gloss annotations and texts, which are an order of magnitude smaller than typical parallel data for training spoken language translation models. Data is thus a bottleneck for training effective sign language translation models. To mitigate this problem, we propose to progressively pretrain the model from general-domain datasets that include a large amount of external supervision to within-domain datasets. Concretely, we pretrain the sign-to-gloss visual network on the general domain of human actions and the within-domain of a sign-to-gloss dataset, and pretrain the gloss-to-text translation network on the general domain of a multilingual corpus and the within-domain of a gloss-to-text corpus. The joint model is fine-tuned with an additional module named the visual-language mapper that connects the two networks. This simple baseline surpasses the previous state-of-the-art results on two sign language translation benchmarks, demonstrating the effectiveness of transfer learning. With its simplicity and strong performance, this approach can serve as a solid baseline for future research. Code and models are available at: https://github.com/FangyunWei/SLRT.
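The abstract describes a pipeline of three components: a sign-to-gloss visual network, a visual-language (V-L) mapper, and a gloss-to-text translation network, fine-tuned jointly. The following is a minimal structural sketch of how these pieces compose; all class names, method names, and the toy feature logic are illustrative assumptions, not the authors' implementation (which is available in the linked repository).

```python
class VisualNetwork:
    """Sign-to-gloss visual encoder (pretrained first on general-domain
    human-action data, then on a within-domain sign-to-gloss dataset)."""

    def encode(self, sign_video_frames):
        # Placeholder: map each frame to a one-element feature vector.
        return [[float(len(frame))] for frame in sign_video_frames]


class VLMapper:
    """Lightweight module that projects visual features into the input
    space of the translation network; trained during joint fine-tuning."""

    def map(self, visual_features):
        # Toy projection: append a bias-like component to each feature.
        return [feature + [1.0] for feature in visual_features]


class TranslationNetwork:
    """Gloss-to-text translator (pretrained first on a general-domain
    multilingual corpus, then on a within-domain gloss-to-text corpus)."""

    def translate(self, mapped_features):
        # Placeholder: emit one dummy token per input feature.
        return " ".join(f"tok{i}" for i, _ in enumerate(mapped_features))


class SignLanguageTranslator:
    """Joint model: visual network -> V-L mapper -> translation network."""

    def __init__(self):
        self.visual = VisualNetwork()
        self.mapper = VLMapper()
        self.translator = TranslationNetwork()

    def __call__(self, sign_video_frames):
        features = self.visual.encode(sign_video_frames)
        mapped = self.mapper.map(features)
        return self.translator.translate(mapped)
```

The point of the sketch is the progressive-pretraining split: the two outer networks are pretrained separately (general domain, then within-domain) before the V-L mapper ties them together during joint fine-tuning.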
