Paper Title

CHORE: Contact, Human and Object REconstruction from a single RGB image

Paper Authors

Xianghui Xie, Bharat Lal Bhatnagar, Gerard Pons-Moll

Paper Abstract

Most prior works on perceiving 3D humans from images reason about the human in isolation, without their surroundings. However, humans constantly interact with surrounding objects, which calls for models that can reason not only about the human but also about the object and their interaction. The problem is extremely challenging due to heavy occlusions between humans and objects, diverse interaction types, and depth ambiguity. In this paper, we introduce CHORE, a novel method that learns to jointly reconstruct the human and the object from a single RGB image. CHORE takes inspiration from recent advances in implicit surface learning and classical model-based fitting. We compute a neural reconstruction of the human and the object, represented implicitly with two unsigned distance fields, a correspondence field to a parametric body, and an object pose field. This allows us to robustly fit a parametric body model and a 3D object template while reasoning about interactions. Furthermore, prior pixel-aligned implicit learning methods use synthetic data and make assumptions that are not met by real data. We propose an elegant depth-aware scaling that allows more efficient shape learning on real data. Experiments show that our joint reconstruction learned with the proposed strategy significantly outperforms the SOTA. Our code and models are available at https://virtualhumans.mpi-inf.mpg.de/chore
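
To make the field parameterization described in the abstract concrete, here is a minimal sketch, assuming a PyTorch-style decoder that maps a pixel-aligned image feature plus a 3D query point to the abstract's outputs: two unsigned distance fields (human and object), a correspondence field to body parts of a parametric body, and an object pose field (here simplified to an offset toward the object center). The class name, feature dimension, 24-part body segmentation, and the toy fitting term are illustrative assumptions, not the authors' released implementation (see the project page for that).

```python
# Illustrative sketch of CHORE-style pixel-aligned neural fields (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CHORELikeFieldDecoder(nn.Module):
    def __init__(self, feat_dim=256, n_body_parts=24):  # assumed dimensions
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.udf_human = nn.Linear(256, 1)                # unsigned distance to the human surface
        self.udf_object = nn.Linear(256, 1)               # unsigned distance to the object surface
        self.part_logits = nn.Linear(256, n_body_parts)   # correspondence field: body-part label per point
        self.obj_center = nn.Linear(256, 3)               # object pose field: offset toward object center

    def forward(self, pixel_feat, query_xyz):
        # Concatenate the pixel-aligned feature with the 3D query point and decode all fields.
        h = self.backbone(torch.cat([pixel_feat, query_xyz], dim=-1))
        return {
            "du_human": F.softplus(self.udf_human(h)).squeeze(-1),   # distances are non-negative
            "du_object": F.softplus(self.udf_object(h)).squeeze(-1),
            "parts": self.part_logits(h),
            "obj_center": self.obj_center(h),
        }

# Toy usage: query the fields at body-model vertices and penalize their predicted
# distance to the human surface -- one plausible term of a model-fitting objective.
decoder = CHORELikeFieldDecoder()
pixel_feat = torch.randn(1, 1000, 256)                    # hypothetical pixel-aligned features
verts = torch.randn(1, 1000, 3, requires_grad=True)       # stand-in for parametric body vertices
pred = decoder(pixel_feat, verts)
loss_fit = pred["du_human"].mean()                        # pull vertices toward the predicted surface
loss_fit.backward()
```

In a full system, terms like this would be combined with an object-template fitting term driven by the object distance and pose fields, plus interaction/contact constraints, and minimized over the body and object pose parameters.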
