Paper Title
Including Facial Expressions in Contextual Embeddings for Sign Language Generation
Paper Authors
Paper Abstract
State-of-the-art sign language generation frameworks lack expressivity and naturalness, a result of focusing only on manual signs while neglecting the affective, grammatical, and semantic functions of facial expressions. The purpose of this work is to augment the semantic representation of sign language by grounding facial expressions. We study the effect of modeling the relationship between text, gloss, and facial expressions on the performance of sign generation systems. In particular, we propose a Dual Encoder Transformer able to generate manual signs as well as facial expressions by capturing the similarities and differences found in text and sign gloss annotations. We take into consideration the role of facial muscle activity in expressing the intensity of manual signs, and we are the first to employ facial action units in sign language generation. We perform a series of experiments showing that our proposed model improves the quality of automatically generated sign language.
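The dual-encoder idea described above can be illustrated with a minimal sketch: one encoder consumes spoken-language text, a second consumes sign glosses, and their pooled contexts are fused before decoding into both a manual-sign representation and facial action-unit intensities. This is not the authors' implementation; the class name, mean-pooling encoders, random weights, and output dimensions (10 pose values, 17 action units) are all illustrative assumptions standing in for full transformer stacks.

```python
import random

random.seed(0)
D = 8  # embedding size (illustrative)

def embed(tokens, table):
    # Look up (or lazily create) a random embedding per token and
    # mean-pool: a toy stand-in for a full transformer encoder.
    vecs = [table.setdefault(t, [random.uniform(-1, 1) for _ in range(D)])
            for t in tokens]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(D)]

class DualEncoderSketch:
    """Toy dual encoder: one encoder for spoken-language text, one for
    sign glosses; their pooled contexts are concatenated and decoded
    into (a) a manual-sign pose vector and (b) facial action-unit
    intensities. All weights here are random placeholders."""

    def __init__(self, n_pose=10, n_aus=17):
        self.text_table, self.gloss_table = {}, {}
        # Linear decoder weights: fused context (2*D) -> outputs.
        self.w_pose = [[random.uniform(-1, 1) for _ in range(2 * D)]
                       for _ in range(n_pose)]
        self.w_au = [[random.uniform(-1, 1) for _ in range(2 * D)]
                     for _ in range(n_aus)]

    def generate(self, text_tokens, gloss_tokens):
        # Fuse the two encoder outputs by concatenation (length 2*D).
        fused = (embed(text_tokens, self.text_table)
                 + embed(gloss_tokens, self.gloss_table))
        pose = [sum(w * x for w, x in zip(row, fused)) for row in self.w_pose]
        aus = [sum(w * x for w, x in zip(row, fused)) for row in self.w_au]
        return pose, aus

model = DualEncoderSketch()
pose, aus = model.generate(["the", "weather", "is", "cold"],
                           ["WEATHER", "COLD"])
print(len(pose), len(aus))  # 10 17
```

The key design point the sketch mirrors is that text and gloss are encoded separately, so the model can exploit where the two modalities agree and where they diverge, while a single decoder head pair emits manual and facial channels jointly.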