Paper Title
Learning Complete 3D Morphable Face Models from Images and Videos
Paper Authors
Abstract
Most 3D face reconstruction methods rely on 3D morphable models, which disentangle the space of facial deformations into identity geometry, expressions, and skin reflectance. These models are typically learned from a limited number of 3D scans and thus do not generalize well across different identities and expressions. We present the first approach to learn complete 3D models of face identity geometry, albedo, and expression just from images and videos. The virtually endless collection of such data, in combination with our self-supervised learning-based approach, allows for learning face models that generalize beyond the span of existing approaches. Our network design and loss functions ensure a disentangled parameterization of not only identity and albedo, but also, for the first time, an expression basis. Our method also allows for in-the-wild monocular reconstruction at test time. We show that our learned models generalize better and lead to higher-quality image-based reconstructions than existing approaches.
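To make the "disentangled parameterization" concrete, the standard 3D morphable model is a linear combination of a mean face with separate identity, expression, and reflectance bases, each scaled by its own coefficient vector. The sketch below is illustrative only: all dimensions, names, and the random bases are assumptions for demonstration, not taken from this paper or any released model.

```python
import numpy as np

# Hypothetical dimensions (assumed, not from the paper).
N_VERTS = 5000                    # mesh vertices
N_ID, N_EXP, N_ALB = 80, 64, 80   # basis sizes for identity/expression/albedo

rng = np.random.default_rng(0)
mean_shape  = rng.standard_normal(3 * N_VERTS)          # mean face geometry
id_basis    = rng.standard_normal((3 * N_VERTS, N_ID))  # identity geometry basis
exp_basis   = rng.standard_normal((3 * N_VERTS, N_EXP)) # expression basis
mean_albedo = rng.standard_normal(3 * N_VERTS)          # mean per-vertex albedo
alb_basis   = rng.standard_normal((3 * N_VERTS, N_ALB)) # skin reflectance basis

def reconstruct(alpha_id, alpha_exp, alpha_alb):
    """Linear 3DMM: geometry is the mean plus an identity offset plus an
    expression offset; albedo is the mean plus a reflectance offset.
    Disentanglement means each coefficient group drives only its factor."""
    shape = mean_shape + id_basis @ alpha_id + exp_basis @ alpha_exp
    albedo = mean_albedo + alb_basis @ alpha_alb
    return shape.reshape(-1, 3), albedo.reshape(-1, 3)

# Zero coefficients recover the mean face and mean albedo.
verts, alb = reconstruct(np.zeros(N_ID), np.zeros(N_EXP), np.zeros(N_ALB))
```

The paper's contribution is learning such bases from images and videos via self-supervision rather than fitting them to a limited set of 3D scans.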