Paper Title
Image2Gif: Generating Continuous Realistic Animations with Warping NODEs
Paper Authors
Paper Abstract
Generating smooth animations from a limited number of sequential observations has many applications in vision. For example, it can be used to increase the number of frames per second, or to generate a new trajectory based only on the first and last frames, e.g. the motion of facial emotions. Although the observed data (frames) are discrete, the problem of generating a new trajectory is a continuous one. In addition, to be perceptually realistic, the domain of an image should not alter drastically along the trajectory of changes. In this paper, we propose a new framework, Warping Neural ODE, for generating a smooth animation (video frame interpolation) in a continuous manner, given two ("farther apart") frames denoting the start and the end of the animation. The key feature of our framework is a continuous spatial transformation of the image based on a vector field derived from a system of differential equations. This allows us to achieve smoothness and realism in an animation with infinitely small time steps between the frames. We demonstrate the application of our work to generating an animation given two frames, in different training settings, including a Generative Adversarial Network (GAN) and an $L_2$ loss.
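The core mechanism described above, advecting image content along a vector field by integrating an ODE over pixel coordinates, can be illustrated with a minimal sketch. This is not the paper's implementation: `warp_image` and `vector_field` are hypothetical names, the vector field is hand-specified rather than learned, and a fixed-step Euler loop stands in for the adaptive solver a Neural ODE would use.

```python
import numpy as np

def warp_image(img, vector_field, t1=1.0, n_steps=20):
    """Warp a grayscale image by advecting each pixel along `vector_field`,
    a function (x, y) -> (dx/dt, dy/dt), from t=0 to t=t1.
    Fixed-step Euler integration is a stand-in for a proper ODE solver."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dt = t1 / n_steps
    # Integrate backward in time: find where each output pixel came from.
    for _ in range(n_steps):
        dx, dy = vector_field(xs, ys)
        xs -= dx * dt
        ys -= dy * dt
    # Nearest-neighbour sampling at the traced-back source positions.
    xi = np.clip(np.round(xs).astype(int), 0, w - 1)
    yi = np.clip(np.round(ys).astype(int), 0, h - 1)
    return img[yi, xi]

# Example: a constant rightward flow moves content 2 pixels right over t1=2.
img = np.zeros((5, 5))
img[2, 1] = 1.0
shifted = warp_image(img, lambda x, y: (np.ones_like(x), np.zeros_like(y)), t1=2.0)
```

Because the warp is defined by integrating a smooth vector field, intermediate frames for any time $t \in [0, t_1]$ come for free by stopping the integration early, which is what enables arbitrarily fine temporal interpolation between two given frames.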