Paper Title

HOReeNet: 3D-aware Hand-Object Grasping Reenactment

Authors

Changhwa Lee, Junuk Cha, Hansol Lee, Seongyeong Lee, Donguk Kim, Seungryul Baek

Abstract

We present HOReeNet, which tackles the novel task of manipulating images involving hands, objects, and their interactions. In particular, we are interested in transferring objects from source images to target images and manipulating 3D hand postures to tightly grasp the transferred objects. Furthermore, the manipulation needs to be reflected in the 2D image space. In our reenactment scenario involving hand-object interactions, 3D reconstruction becomes essential, as 3D contact reasoning between hands and objects is required to achieve a tight grasp. At the same time, obtaining high-quality 2D images from 3D space requires well-designed 3D-to-2D projection and image refinement. HOReeNet is the first fully differentiable framework proposed for this task. On hand-object interaction datasets, we compared HOReeNet to conventional image translation and reenactment algorithms, and demonstrated that our approach achieves state-of-the-art results on the proposed task.
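The abstract highlights 3D-to-2D projection as a key step in rendering the manipulated 3D hands and objects back into image space. As an illustration only, a weak-perspective projection of mesh vertices (a common camera model in the hand-mesh literature; the paper's actual projection may differ) can be sketched as:

```python
import numpy as np

def weak_perspective_project(vertices, scale, translation):
    """Project 3D mesh vertices to 2D image coordinates.

    Weak-perspective model: drop depth, then apply a uniform
    image-space scale and a 2D translation. All names here are
    illustrative, not taken from the HOReeNet codebase.

    vertices:    (N, 3) array of 3D points (e.g., hand/object mesh vertices)
    scale:       scalar camera scale
    translation: (2,) 2D offset in pixels
    """
    return scale * vertices[:, :2] + translation

# Toy example: project three vertices of a hypothetical mesh.
verts = np.array([[0.1, 0.2, 0.5],
                  [0.0, -0.1, 0.4],
                  [-0.2, 0.3, 0.6]])
pts2d = weak_perspective_project(verts, scale=100.0,
                                 translation=np.array([128.0, 128.0]))
print(pts2d)
```

Because the projection is a simple affine map of the vertex coordinates, it is differentiable, which is the property the abstract's "fully differentiable framework" claim depends on for end-to-end training.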
