Paper Title
Learning Nonparametric Human Mesh Reconstruction from a Single Image without Ground Truth Meshes
Paper Authors
Paper Abstract
Nonparametric approaches have shown promising results on reconstructing 3D human mesh from a single monocular image. Unlike previous approaches that use a parametric human model such as the skinned multi-person linear model (SMPL) and attempt to regress the model parameters, nonparametric approaches relax the heavy reliance on the parametric space. However, existing nonparametric methods require ground truth meshes as their regression target for each vertex, and obtaining ground truth mesh labels is very expensive. In this paper, we propose a novel approach to learn human mesh reconstruction without any ground truth meshes. This is made possible by introducing two new terms into the loss function of a graph convolutional neural network (Graph CNN). The first term is a Laplacian prior that acts as a regularizer on the reconstructed mesh. The second term is a part segmentation loss that forces the projected region of the reconstructed mesh to match the part segmentation. Experimental results on multiple public datasets show that, without using 3D ground truth meshes, the proposed approach outperforms previous state-of-the-art approaches that require ground truth meshes for training.
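The two loss terms described in the abstract can be illustrated with a short, self-contained PyTorch sketch. This is not the authors' implementation: the tensor shapes, the uniform-Laplacian formulation, the cross-entropy form of the part segmentation term, and the placeholder weights w_lap and w_seg are assumptions made only for illustration (the actual projection/rasterization of the mesh into part maps is omitted).

import torch


def laplacian_prior_loss(vertices, adjacency):
    # vertices:  (V, 3) vertex positions predicted by the Graph CNN
    # adjacency: (V, V) {0,1}-valued float adjacency matrix of the mesh template
    # Penalizes the uniform graph Laplacian, i.e. each vertex's offset from the
    # mean of its neighbors, discouraging spiky, irregular reconstructions.
    degree = adjacency.sum(dim=1, keepdim=True).clamp(min=1.0)
    mean_neighbor = adjacency @ vertices / degree
    delta = vertices - mean_neighbor
    return (delta ** 2).sum(dim=1).mean()


def part_segmentation_loss(part_logits, target_parts):
    # part_logits:  (P, H, W) per-part score maps obtained by projecting and
    #               rasterizing the reconstructed mesh (renderer omitted here)
    # target_parts: (H, W) long tensor of part labels from a 2D part segmenter
    return torch.nn.functional.cross_entropy(
        part_logits.unsqueeze(0), target_parts.unsqueeze(0)
    )


# Hypothetical combination with a standard 2D re-projection loss; w_lap and
# w_seg are illustrative weights, not values reported in the paper:
# total_loss = keypoint_loss \
#              + w_lap * laplacian_prior_loss(verts, adj) \
#              + w_seg * part_segmentation_loss(rendered_parts, gt_parts)

In this sketch the Laplacian term needs only the fixed template connectivity, and the part segmentation term needs only 2D labels, which is consistent with the abstract's claim that no 3D ground truth meshes are required for training.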