Paper Title


NeMF: Neural Motion Fields for Kinematic Animation

Authors

Chengan He, Jun Saito, James Zachary, Holly Rushmeier, Yi Zhou

Abstract


We present an implicit neural representation to learn the spatio-temporal space of kinematic motions. Unlike previous work that represents motion as discrete sequential samples, we propose to express the vast motion space as a continuous function over time, hence the name Neural Motion Fields (NeMF). Specifically, we use a neural network to learn this function for miscellaneous sets of motions, which is designed to be a generative model conditioned on a temporal coordinate $t$ and a random vector $z$ for controlling the style. The model is then trained as a Variational Autoencoder (VAE) with motion encoders to sample the latent space. We train our model with a diverse human motion dataset and quadruped dataset to prove its versatility, and finally deploy it as a generic motion prior to solve task-agnostic problems and show its superiority in different motion generation and editing applications, such as motion interpolation, in-betweening, and re-navigating. More details can be found on our project page: https://cs.yale.edu/homes/che/projects/nemf/.
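The core idea above can be sketched in a few lines of PyTorch: a decoder network that maps a temporal coordinate $t$ and a latent style vector $z$ to a pose, so an entire motion is the continuous function $f(t, z)$ rather than a sequence of discrete frames. This is a minimal illustrative sketch, not the authors' actual architecture; the layer sizes, `pose_dim`, and MLP structure are assumptions.

```python
import torch
import torch.nn as nn

class NeMFDecoder(nn.Module):
    """Hypothetical sketch of a neural motion field decoder:
    pose = f(t, z), with t a temporal coordinate and z a style latent."""

    def __init__(self, latent_dim=64, pose_dim=72, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, t, z):
        # t: (N, 1) time coordinates in [0, 1]; z: (N, latent_dim) style codes
        return self.net(torch.cat([t, z], dim=-1))

decoder = NeMFDecoder()
t = torch.linspace(0.0, 1.0, 30).unsqueeze(-1)  # query 30 continuous time points
z = torch.randn(1, 64).expand(30, -1)           # one style vector, shared across time
poses = decoder(t, z)                           # (30, 72): one pose per queried t
```

Because $t$ is continuous, the same trained decoder can be queried at any temporal resolution, which is what enables applications such as interpolation and in-betweening; in the paper the decoder is trained inside a VAE so that $z$ can be sampled or optimized.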
