Paper Title
Autoregressive GAN for Semantic Unconditional Head Motion Generation
Paper Authors
Paper Abstract
In this work, we address the task of unconditional head motion generation to animate still human faces in a low-dimensional semantic space from a single reference pose. Different from traditional audio-conditioned talking head generation that seldom puts emphasis on realistic head motions, we devise a GAN-based architecture that learns to synthesize rich head motion sequences over long durations while maintaining low error accumulation levels. In particular, the autoregressive generation of incremental outputs ensures smooth trajectories, while a multi-scale discriminator on input pairs drives generation toward better handling of high- and low-frequency signals and less mode collapse. We experimentally demonstrate the relevance of the proposed method and show its superiority compared to models that attained state-of-the-art performance on similar tasks.
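The abstract names two architectural ideas: a generator that emits pose increments autoregressively (so each step only adds a small delta to the previous pose, which keeps trajectories smooth and confines error accumulation to small per-step residuals), and a discriminator that scores (reference pose, motion sequence) pairs at several temporal scales. The following PyTorch sketch is an illustration of those two ideas under assumed pose dimensions, layer sizes, and module names; it is not the authors' released implementation.

```python
# Minimal sketch of incremental autoregressive motion generation and a
# multi-scale pair discriminator, as described in the abstract. All shapes,
# hyperparameters, and class names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IncrementalMotionGenerator(nn.Module):
    """Autoregressively predicts per-step pose increments from noise and a reference pose."""

    def __init__(self, pose_dim=6, noise_dim=64, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRUCell(pose_dim + noise_dim, hidden_dim)
        self.to_delta = nn.Linear(hidden_dim, pose_dim)

    def forward(self, ref_pose, noise, num_steps):
        # ref_pose: (B, pose_dim) single reference pose; noise: (B, noise_dim)
        h = ref_pose.new_zeros(ref_pose.size(0), self.rnn.hidden_size)
        pose = ref_pose
        poses = []
        for _ in range(num_steps):
            h = self.rnn(torch.cat([pose, noise], dim=-1), h)
            delta = self.to_delta(h)   # small increment -> smooth trajectory
            pose = pose + delta        # cumulative sum of increments
            poses.append(pose)
        return torch.stack(poses, dim=1)  # (B, T, pose_dim)


class MultiScaleMotionDiscriminator(nn.Module):
    """Scores (reference pose, motion sequence) pairs at several temporal resolutions."""

    def __init__(self, pose_dim=6, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.heads = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(2 * pose_dim, 128, kernel_size=4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv1d(128, 1, kernel_size=4, stride=2, padding=1),
            )
            for _ in scales
        )

    def forward(self, ref_pose, motion):
        # ref_pose: (B, pose_dim); motion: (B, T, pose_dim)
        ref = ref_pose.unsqueeze(1).expand_as(motion)
        x = torch.cat([motion, ref], dim=-1).transpose(1, 2)  # (B, 2*pose_dim, T)
        scores = []
        for scale, head in zip(self.scales, self.heads):
            # Coarser (downsampled) scales emphasize low-frequency motion,
            # the full-resolution scale sees high-frequency detail.
            xs = x if scale == 1 else F.avg_pool1d(x, kernel_size=scale)
            scores.append(head(xs))
        return scores
```

In this sketch, the discriminator conditions on the pair (reference pose, generated sequence), and its per-scale outputs would each contribute to the adversarial loss; that pairing and multi-scale supervision is what the abstract credits for better handling of high- and low-frequency signals and reduced mode collapse.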