Paper Title

Multi-label Relation Modeling in Facial Action Units Detection

Paper Authors

Xianpeng Ji, Yu Ding, Lincheng Li, Yu Chen, Changjie Fan

Abstract

This paper describes an approach to facial action unit detection. The involved action units (AUs) include AU1 (Inner Brow Raiser), AU2 (Outer Brow Raiser), AU4 (Brow Lowerer), AU6 (Cheek Raiser), AU12 (Lip Corner Puller), AU15 (Lip Corner Depressor), AU20 (Lip Stretcher), and AU25 (Lips Part). Our work relies on the dataset released by the FG-2020 Competition: Affective Behavior Analysis In-the-Wild (ABAW). The proposed method consists of data preprocessing, feature extraction, and AU classification. The data preprocessing includes the detection of face texture and landmarks. Static texture features and dynamic landmark features are extracted through neural networks and then fused into a latent feature representation. Finally, the fused feature is taken as the initial hidden state of a recurrent neural network with a trainable AU lookup table. The output of the RNN gives the AU classification results. Detection performance is evaluated with 0.5$\times$accuracy + 0.5$\times$F1. Our method achieves 0.56 on the validation data specified by the organizing committee.
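As a rough illustration of the classification stage and metric described above, the sketch below assumes PyTorch and scikit-learn. The module name AUClassifier, the feature dimension, the choice of GRU, and the averaging used in challenge_score are assumptions made for illustration, not details taken from the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.metrics import f1_score

NUM_AUS = 8          # AU1, AU2, AU4, AU6, AU12, AU15, AU20, AU25
FEATURE_DIM = 256    # assumed size of the fused texture/landmark feature

class AUClassifier(nn.Module):
    """Hypothetical RNN head: the fused feature initializes the hidden state,
    and a trainable AU lookup table provides one input step per action unit."""

    def __init__(self, num_aus: int = NUM_AUS, hidden_dim: int = FEATURE_DIM):
        super().__init__()
        self.au_table = nn.Embedding(num_aus, hidden_dim)  # trainable AU lookup table
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)               # per-AU binary prediction
        self.num_aus = num_aus

    def forward(self, fused_feature: torch.Tensor) -> torch.Tensor:
        # fused_feature: (batch, hidden_dim), the fused static-texture /
        # dynamic-landmark representation, used as the initial hidden state.
        batch = fused_feature.size(0)
        h0 = fused_feature.unsqueeze(0)                         # (1, batch, hidden)
        au_ids = torch.arange(self.num_aus, device=fused_feature.device)
        inputs = self.au_table(au_ids).unsqueeze(0).repeat(batch, 1, 1)
        out, _ = self.rnn(inputs, h0)                           # (batch, num_aus, hidden)
        return torch.sigmoid(self.head(out)).squeeze(-1)        # (batch, num_aus)

def challenge_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """0.5 * accuracy + 0.5 * F1 as stated in the abstract; per-label accuracy
    and macro-averaged F1 over AUs are assumptions about the exact averaging."""
    acc = (y_true == y_pred).mean()
    f1 = f1_score(y_true, y_pred, average="macro")
    return 0.5 * acc + 0.5 * f1
```

For instance, thresholding the sigmoid outputs of AUClassifier at 0.5 yields an 8-dimensional binary AU prediction per frame, which can then be scored against the binary AU labels with challenge_score.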
