Paper Title


3D Interacting Hand Pose Estimation by Hand De-occlusion and Removal

Paper Authors

Hao Meng, Sheng Jin, Wentao Liu, Chen Qian, Mengxiang Lin, Wanli Ouyang, Ping Luo

Paper Abstract


Estimating 3D interacting hand pose from a single RGB image is essential for understanding human actions. Unlike most previous works that directly predict the 3D poses of two interacting hands simultaneously, we propose to decompose the challenging interacting hand pose estimation task and estimate the pose of each hand separately. In this way, it is straightforward to take advantage of the latest research progress on single-hand pose estimation. However, hand pose estimation in interacting scenarios is very challenging, due to (1) severe hand-hand occlusion and (2) ambiguity caused by the homogeneous appearance of hands. To tackle these two challenges, we propose a novel Hand De-occlusion and Removal (HDR) framework to perform hand de-occlusion and distractor removal. We also propose the first large-scale synthetic amodal hand dataset, termed Amodal InterHand Dataset (AIH), to facilitate model training and promote related research. Experiments show that the proposed method significantly outperforms previous state-of-the-art interacting hand pose estimation approaches. Code and data are available at https://github.com/MengHao666/HDR.
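To make the decomposition concrete, here is a minimal sketch of the pipeline the abstract describes: crop each hand, restore its occluded parts (de-occlusion), erase the other hand (distractor removal), then apply an off-the-shelf single-hand estimator. All function names, signatures, and shapes below are hypothetical placeholders, not the authors' actual API; the real implementation is in the linked repository.

```python
# Hypothetical sketch of the HDR-style decomposition described in the abstract.
# deocclude / remove_distractor / single_hand_estimator are placeholders,
# not the authors' real modules; see https://github.com/MengHao666/HDR.
import numpy as np

def deocclude(patch: np.ndarray) -> np.ndarray:
    """Recover the appearance of the target hand where it is occluded.
    Placeholder: an image-to-image network in the actual framework."""
    return patch

def remove_distractor(patch: np.ndarray) -> np.ndarray:
    """Erase the other (distractor) hand so the patch looks single-handed.
    Placeholder: an inpainting-style network in the actual framework."""
    return patch

def single_hand_estimator(patch: np.ndarray) -> np.ndarray:
    """Off-the-shelf single-hand 3D pose estimator.
    Placeholder: returns 21 dummy 3D joints."""
    return np.zeros((21, 3))

def estimate_interacting_hands(image, left_box, right_box):
    """Decompose two-hand estimation into two single-hand problems."""
    poses = {}
    for name, (x0, y0, x1, y1) in (("left", left_box), ("right", right_box)):
        patch = image[y0:y1, x0:x1]        # crop one hand
        patch = deocclude(patch)           # hallucinate occluded regions
        patch = remove_distractor(patch)   # suppress the other hand
        poses[name] = single_hand_estimator(patch)
    return poses

if __name__ == "__main__":
    img = np.zeros((256, 256, 3), dtype=np.uint8)
    out = estimate_interacting_hands(img, (10, 10, 120, 120), (100, 100, 230, 230))
    print({k: v.shape for k, v in out.items()})  # (21, 3) joints per hand
```

Note that the loop treats both hands symmetrically, which is the point of the decomposition: any improvement to the single-hand estimator transfers directly to the interacting case.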
