Paper Title

FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable Model from a Hybrid Dataset

Paper Authors

Lizhen Wang, Zhiyuan Chen, Tao Yu, Chenguang Ma, Liang Li, Yebin Liu

Paper Abstract

We present FaceVerse, a fine-grained 3D neural face model built from a hybrid East Asian face dataset containing 60K fused RGB-D images and 2K high-fidelity 3D head scan models. A novel coarse-to-fine structure is proposed to take better advantage of our hybrid dataset. In the coarse module, we generate a base parametric model from the large-scale RGB-D images, which is able to predict accurate rough 3D face models across different genders, ages, etc. Then in the fine module, a conditional StyleGAN architecture trained on the high-fidelity scan models is introduced to enrich elaborate facial geometric and texture details. Note that, unlike previous methods, both our base and detail modules are changeable, which enables an innovative application of adjusting both the basic attributes and the facial details of 3D face models. Furthermore, we propose a single-image fitting framework based on differentiable rendering. Extensive experiments show that our method outperforms state-of-the-art methods.
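The abstract describes a two-stage pipeline: a coarse fit with a base parametric model, followed by a detail refinement stage, with single-image fitting driven by differentiable rendering. Below is a minimal PyTorch sketch of that coarse-to-fine fitting idea, not the authors' implementation: `BaseParametricModel`, `DetailGenerator`, `render_loss`, and `fit_single_image` are hypothetical placeholders, and the photometric loss is a stub standing in for a real differentiable rasterizer and texture model.

```python
# Minimal sketch (assumptions, not the FaceVerse code): a coarse-to-fine
# single-image fitting loop. The base morphable model, the detail generator
# (standing in for the conditional StyleGAN module), and the rendering loss
# are all placeholders.
import torch
import torch.nn as nn

class BaseParametricModel(nn.Module):
    """Hypothetical linear morphable model: mean shape plus identity/expression bases."""
    def __init__(self, n_verts=5000, n_id=150, n_exp=50):
        super().__init__()
        self.register_buffer("mean", torch.zeros(n_verts * 3))
        self.register_buffer("id_basis", torch.randn(n_verts * 3, n_id) * 1e-3)
        self.register_buffer("exp_basis", torch.randn(n_verts * 3, n_exp) * 1e-3)

    def forward(self, id_coeff, exp_coeff):
        verts = self.mean + self.id_basis @ id_coeff + self.exp_basis @ exp_coeff
        return verts.view(-1, 3)  # coarse geometry

class DetailGenerator(nn.Module):
    """Stand-in for the detail module: maps a latent code to per-vertex displacements."""
    def __init__(self, n_verts=5000, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_verts * 3))

    def forward(self, z):
        return self.net(z).view(-1, 3)

def render_loss(verts, image):
    """Placeholder photometric loss; a real pipeline would rasterize the
    textured mesh with a differentiable renderer and compare it to the photo."""
    return (verts.mean() - image.mean()) ** 2

def fit_single_image(image, steps_coarse=200, steps_fine=200):
    base, detail = BaseParametricModel(), DetailGenerator()
    id_c = torch.zeros(150, requires_grad=True)
    exp_c = torch.zeros(50, requires_grad=True)
    z = torch.zeros(64, requires_grad=True)

    # Stage 1: optimize only the base-model coefficients (coarse fit).
    opt = torch.optim.Adam([id_c, exp_c], lr=1e-2)
    for _ in range(steps_coarse):
        opt.zero_grad()
        loss = render_loss(base(id_c, exp_c), image)
        loss.backward()
        opt.step()

    # Stage 2: freeze the coarse fit and optimize the detail latent code.
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps_fine):
        opt.zero_grad()
        verts = base(id_c, exp_c).detach() + detail(z)
        loss = render_loss(verts, image)
        loss.backward()
        opt.step()

    return base(id_c, exp_c).detach() + detail(z).detach()

# Usage: fitted_verts = fit_single_image(torch.rand(3, 256, 256))
```

The two-stage loop mirrors the coarse-to-fine split described in the abstract: the base attributes (identity, expression) and the detail latent are optimized separately, so both remain adjustable after fitting.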
