Paper Title

Is Lip Region-of-Interest Sufficient for Lipreading?

Paper Authors

Zhang, Jing-Xuan, Wan, Gen-Shun, Pan, Jia

Paper Abstract


Lip region-of-interest (ROI) is conventionally used as the visual input in the lipreading task. Few works have adopted the entire face as visual input, because the lip-excluded parts of the face are usually considered redundant and irrelevant to visual speech recognition. However, faces contain much more detailed information than lips, such as the speaker's head pose, emotion, and identity. We argue that such information could benefit visual speech recognition if a powerful feature extractor employing the entire face is trained. In this work, we propose to adopt the entire face for lipreading with self-supervised learning. AV-HuBERT, an audio-visual multi-modal self-supervised learning framework, was adopted in our experiments. Our experimental results showed that adopting the entire face achieved a 16% relative word error rate (WER) reduction on the lipreading task, compared with the baseline method using the lip ROI as visual input. Without self-supervised pretraining, the model with face input achieved a higher WER than the one using lip input when training data was limited (30 hours), but a slightly lower WER when a large amount of training data (433 hours) was used.
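To make the comparison concrete, the two input conditions differ only in how each video frame is cropped before entering the visual front end. The following is a minimal sketch of that preprocessing choice; the function name, crop size, and landmark coordinates are illustrative assumptions, not the paper's exact pipeline (which would use a real face/landmark detector).

```python
import numpy as np

def crop_lip_roi(frame, lip_landmarks, size=96):
    """Cut a fixed-size square centered on the mean lip-landmark position.

    frame: 2-D grayscale image (H, W); lip_landmarks: array of (y, x) points.
    This stands in for the conventional lip-ROI input; the proposed
    alternative simply feeds the whole face frame instead.
    """
    cy, cx = lip_landmarks.mean(axis=0).astype(int)
    half = size // 2
    h, w = frame.shape[:2]
    # Clamp the crop window so it stays inside the frame.
    top = int(np.clip(cy - half, 0, h - size))
    left = int(np.clip(cx - half, 0, w - size))
    return frame[top:top + size, left:left + size]

# Hypothetical 256x256 face frame with lip landmarks near the lower center.
frame = np.zeros((256, 256), dtype=np.uint8)
lip_landmarks = np.array([[180, 120], [185, 136], [180, 152]])  # (y, x) pairs

lip_input = crop_lip_roi(frame, lip_landmarks)  # baseline: 96x96 lip crop
face_input = frame                              # proposed: entire face frame
print(lip_input.shape, face_input.shape)        # (96, 96) (256, 256)
```

The paper's argument is that the extra pixels in `face_input` (head pose, expression, identity cues) are not noise but usable signal, provided the feature extractor is pretrained in a self-supervised way that can exploit them.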
