Paper Title

FV2ES: A Fully End2End Multimodal System for Fast Yet Effective Video Emotion Recognition Inference

Paper Authors

Qinglan Wei, Xuling Huang, Yuan Zhang

Paper Abstract

On today's social networks, more and more people prefer to express their emotions in videos through text, speech, and rich facial expressions. Multimodal video emotion analysis can automatically infer a user's inner state from facial expressions and gestures in the images, tones in the voice, and the recognized natural language. However, in existing research the acoustic modality has long been marginalized compared to the visual and textual modalities; that is, it tends to be harder to improve the acoustic modality's contribution to the overall multimodal emotion recognition task. Moreover, although better performance can be obtained with common deep learning methods, the complex structures of these models often lead to low inference efficiency, especially on high-resolution, long videos. Finally, the lack of a fully end-to-end multimodal video emotion recognition system hinders practical application. In this paper, we design a fully end-to-end multimodal video-to-emotion system (named FV2ES) for fast yet effective recognition inference, whose benefits are threefold: (1) applying a hierarchical attention method to the sound spectra breaks through the acoustic modality's limited contribution and outperforms existing models on both the IEMOCAP and CMU-MOSEI datasets; (2) adopting a multi-scale design for visual extraction while using a single branch for inference brings higher efficiency while maintaining prediction accuracy; (3) further integrating data pre-processing into the aligned multimodal learning model significantly reduces computational cost and storage space.
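As an illustration of benefit (1), the following is a minimal PyTorch sketch of hierarchical attention pooling over sound spectra: frame-level attention summarizes the spectrogram frames inside each segment, and segment-level attention then summarizes the segments into a single utterance-level acoustic vector. This is a hedged sketch under assumed shapes, not the authors' FV2ES implementation; the module names, the 80-bin mel input, and the (batch, segments, frames, mels) layout are all illustrative assumptions.

import torch
import torch.nn as nn

class AttnPool(nn.Module):
    """Additive attention pooling over the second-to-last axis."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x):                              # x: (..., steps, dim)
        w = torch.softmax(self.score(x), dim=-2)       # attention weight per step
        return (w * x).sum(dim=-2)                     # weighted sum -> (..., dim)

class HierarchicalSpectrumAttention(nn.Module):
    """Two-level attention: frames -> segment vectors -> utterance vector."""
    def __init__(self, n_mels=80, dim=128):
        super().__init__()
        self.frame_proj = nn.Linear(n_mels, dim)       # embed each spectrogram frame
        self.frame_attn = AttnPool(dim)                # pool frames within a segment
        self.seg_attn = AttnPool(dim)                  # pool segments within an utterance

    def forward(self, spec):                           # spec: (batch, segments, frames, n_mels)
        h = torch.tanh(self.frame_proj(spec))          # (B, S, F, dim)
        seg_vecs = self.frame_attn(h)                  # (B, S, dim)
        return self.seg_attn(seg_vecs)                 # (B, dim) utterance-level feature

# Usage: a batch of 2 utterances, each split into 6 segments of 40 frames.
spec = torch.randn(2, 6, 40, 80)
print(HierarchicalSpectrumAttention()(spec).shape)    # torch.Size([2, 128])

The resulting utterance-level acoustic vector would then be fused with the visual and textual features downstream; the point of the hierarchy is that the network can weight informative frames and segments rather than averaging the whole spectrogram uniformly.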
