Paper Title
i3DMM: Deep Implicit 3D Morphable Model of Human Heads
Paper Authors
Paper Abstract
We present the first deep implicit 3D morphable model (i3DMM) of full heads. Unlike earlier morphable face models it not only captures identity-specific geometry, texture, and expressions of the frontal face, but also models the entire head, including hair. We collect a new dataset consisting of 64 people with different expressions and hairstyles to train i3DMM. Our approach has the following favorable properties: (i) It is the first full head morphable model that includes hair. (ii) In contrast to mesh-based models it can be trained on merely rigidly aligned scans, without requiring difficult non-rigid registration. (iii) We design a novel architecture to decouple the shape model into an implicit reference shape and a deformation of this reference shape. With that, dense correspondences between shapes can be learned implicitly. (iv) This architecture allows us to semantically disentangle the geometry and color components, as color is learned in the reference space. Geometry is further disentangled as identity, expressions, and hairstyle, while color is disentangled as identity and hairstyle components. We show the merits of i3DMM using ablation studies, comparisons to state-of-the-art models, and applications such as semantic head editing and texture transfer. We will make our model publicly available.
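Points (iii) and (iv) describe the key architectural idea: a query point is first mapped by a learned deformation into a shared reference space, and geometry (as a signed distance) and color are evaluated there, which yields implicit dense correspondences and disentangles color from geometry. The following is a minimal illustrative sketch of that decomposition, not the authors' code; the toy `mlp` helper, latent dimension, and network sizes are all assumptions for illustration.

```python
# Minimal sketch (assumption, not the authors' implementation) of the
# i3DMM-style decomposition: deform a query point into a reference space,
# then evaluate a shared SDF and a color field in that reference space.
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Toy untrained MLP: list of (W, b) layers with tanh activations."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

DIM_Z = 8  # latent code size (illustrative choice)

deform_net = mlp([3 + DIM_Z, 32, 3])  # (x, z_geo) -> offset into reference space
sdf_net    = mlp([3, 32, 1])          # reference-space signed distance
color_net  = mlp([3 + DIM_Z, 32, 3])  # (x_ref, z_col) -> RGB in reference space

def i3dmm_forward(x, z_geo, z_col):
    # Geometry latent (identity/expression/hairstyle) drives the deformation;
    # color latent is applied only in the shared reference space.
    x_ref = x + forward(deform_net, np.concatenate([x, z_geo]))
    sdf   = forward(sdf_net, x_ref)
    rgb   = forward(color_net, np.concatenate([x_ref, z_col]))
    return sdf, rgb

x     = np.zeros(3)                # query point in observation space
z_geo = rng.normal(size=DIM_Z)     # geometry code
z_col = rng.normal(size=DIM_Z)     # color code
sdf, rgb = i3dmm_forward(x, z_geo, z_col)
print(sdf.shape, rgb.shape)        # (1,) (3,)
```

Because all shapes share one reference space, any point on one head corresponds to the point with the same reference coordinates on another head, which is what enables applications like texture transfer between identities.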