Paper Title

PINA: Learning a Personalized Implicit Neural Avatar from a Single RGB-D Video Sequence

Authors

Zijian Dong, Chen Guo, Jie Song, Xu Chen, Andreas Geiger, Otmar Hilliges

Abstract

We present a novel method to learn Personalized Implicit Neural Avatars (PINA) from a short RGB-D sequence. This allows non-expert users to create a detailed and personalized virtual copy of themselves, which can be animated with realistic clothing deformations. PINA does not require complete scans, nor does it require a prior learned from large datasets of clothed humans. Learning a complete avatar in this setting is challenging, since only a few depth observations are available, and these are noisy and incomplete (i.e. only partial visibility of the body per frame). We propose a method to learn the shape and non-rigid deformations via a pose-conditioned implicit surface and a deformation field, defined in canonical space. This allows us to fuse all partial observations into a single consistent canonical representation. Fusion is formulated as a global optimization problem over the pose, shape and skinning parameters. The method can learn neural avatars from real, noisy RGB-D sequences for a diverse set of people and clothing styles, and these avatars can be animated given unseen motion sequences.
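To make the core idea concrete, the sketch below illustrates the canonical-space setup the abstract describes: an implicit surface (SDF) defined in canonical space, warped to posed space by linear blend skinning with a learned skinning field. This is a hedged toy illustration, not the paper's actual model: the unit-sphere SDF, the two-bone skinning weights, and all function names are illustrative assumptions standing in for learned networks.

```python
import numpy as np

N_BONES = 2  # toy assumption; a real body model has many more joints

def canonical_sdf(x_canonical):
    """Toy stand-in for the learned implicit surface:
    signed distance to a unit sphere in canonical space."""
    return np.linalg.norm(x_canonical, axis=-1) - 1.0

def skinning_weights(x_canonical):
    """Toy stand-in for the learned skinning field:
    soft per-bone weights that fall off with distance to bone centers."""
    centers = np.array([[-0.5, 0.0, 0.0], [0.5, 0.0, 0.0]])  # hypothetical
    d = np.stack([np.linalg.norm(x_canonical - c, axis=-1) for c in centers],
                 axis=-1)                                    # (N, N_BONES)
    w = np.exp(-d)
    return w / w.sum(axis=-1, keepdims=True)

def lbs_forward(x_canonical, bone_transforms):
    """Warp canonical points to posed space by blending 4x4 bone transforms
    (linear blend skinning)."""
    w = skinning_weights(x_canonical)                        # (N, B)
    homo = np.concatenate([x_canonical,
                           np.ones((len(x_canonical), 1))], axis=-1)   # (N, 4)
    per_bone = np.einsum('bij,nj->nbi', bone_transforms, homo)         # (N, B, 4)
    return np.einsum('nb,nbi->ni', w, per_bone)[:, :3]

# Sanity check: under identity bone transforms the warp is the identity,
# so the surface (zero level set of the canonical SDF) is unchanged.
T_identity = np.tile(np.eye(4), (N_BONES, 1, 1))
x = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
posed = lbs_forward(x, T_identity)
print(np.allclose(posed, x))   # True
print(canonical_sdf(x))        # [-1.  1.]: inside vs. outside the toy surface
```

In the paper's setting, the inverse of this warp is what lets each partial depth observation be pulled back into the shared canonical space, so that fusion and the optimization over pose, shape and skinning parameters can all happen in one consistent frame.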
