Paper Title

Latent Image Animator: Learning to Animate Images via Latent Space Navigation

Authors

Yaohui Wang, Di Yang, Francois Bremond, Antitza Dantcheva

Abstract

Due to the remarkable progress of deep generative models, animating images has become increasingly efficient, and the associated results increasingly realistic. Current animation approaches commonly exploit structure representations extracted from driving videos. Such structure representations are instrumental in transferring motion from driving videos to still images. However, such approaches fail when the source image and driving video encompass large appearance variation. Moreover, extracting structure information requires additional modules that increase the complexity of the animation model. Deviating from such models, we here introduce the Latent Image Animator (LIA), a self-supervised autoencoder that evades the need for structure representation. LIA is streamlined to animate images by linear navigation in the latent space. Specifically, motion in a generated video is constructed by linear displacement of codes in the latent space. To this end, we simultaneously learn a set of orthogonal motion directions and use their linear combination to represent any displacement in the latent space. Extensive quantitative and qualitative analysis suggests that our model systematically and significantly outperforms state-of-the-art methods on the VoxCeleb, Taichi, and TED-talk datasets with respect to generated quality.
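
As a reading aid, the latent navigation mechanism described in the abstract can be sketched in a few lines of PyTorch. Everything below is an illustrative assumption rather than the authors' code: the class name LinearMotionDecomposition, the dictionary sizes, and the QR-based orthogonalization are stand-ins for the paper's learned orthogonal motion dictionary, and in practice the per-direction magnitudes would be predicted by a motion encoder from each driving frame.

```python
import torch
import torch.nn as nn

class LinearMotionDecomposition(nn.Module):
    """Minimal sketch: represent a latent displacement as a linear
    combination of mutually orthogonal, learned motion directions.
    Names and sizes (latent_dim, num_directions) are illustrative."""

    def __init__(self, latent_dim=512, num_directions=20):
        super().__init__()
        # Learnable motion dictionary; orthogonalized on the fly below.
        self.directions = nn.Parameter(torch.randn(num_directions, latent_dim))

    def forward(self, z_source, magnitudes):
        # QR decomposition yields orthonormal columns; transpose back so
        # each row of `basis` is one orthogonal motion direction.
        q, _ = torch.linalg.qr(self.directions.t())  # (latent_dim, num_directions)
        basis = q.t()                                 # (num_directions, latent_dim)
        # Displacement is a linear combination of the orthogonal directions.
        displacement = magnitudes @ basis             # (batch, latent_dim)
        return z_source + displacement

# Hypothetical usage: `magnitudes` would come from a motion encoder
# applied to a driving frame; random tensors stand in here.
model = LinearMotionDecomposition()
z_src = torch.randn(4, 512)   # latent code of the source image
a = torch.randn(4, 20)        # per-direction motion magnitudes
z_drv = model(z_src, a)       # navigated code; a decoder would render it
print(z_drv.shape)            # torch.Size([4, 512])
```

Decoding the navigated code with a generator would then yield one animated frame; repeating this for every driving frame produces the output video.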
