Paper Title
Accurate 3D Hand Pose Estimation for Whole-Body 3D Human Mesh Estimation
Paper Authors
Paper Abstract
Whole-body 3D human mesh estimation aims to reconstruct the 3D human body, hands, and face simultaneously. Although several methods have been proposed, accurate prediction of 3D hands, which consist of the 3D wrist and fingers, remains challenging for two reasons. First, the human kinematic chain has not been carefully considered when predicting 3D wrists. Second, previous works utilize body features to predict 3D fingers, even though body features barely contain finger information. To resolve these limitations, we present Hand4Whole, which has two strong points over previous works. First, we design Pose2Pose, a module that utilizes joint features to predict 3D joint rotations. Using Pose2Pose, Hand4Whole utilizes hand MCP joint features to predict 3D wrists, as MCP joints largely contribute to 3D wrist rotations in the human kinematic chain. Second, Hand4Whole discards the body feature when predicting 3D finger rotations. Hand4Whole is trained in an end-to-end manner and produces much better 3D hand results than previous whole-body 3D human mesh estimation methods. The code is available at https://github.com/mks0601/Hand4Whole_RELEASE.
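The Pose2Pose idea described in the abstract — localize joints on a feature map, gather per-joint features, and regress rotations from them (e.g., the wrist from hand MCP joint features) — can be sketched roughly as follows. This is a minimal NumPy illustration under simplifying assumptions, not the authors' implementation: the soft-argmax localization, nearest-neighbor feature sampling, and the random linear layer standing in for the learned rotation regressor are all placeholders (the released code uses trained networks and differentiable sampling).

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Differentiable 2D joint localization: expectation over a softmaxed heatmap."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return np.array([(p * xs).sum(), (p * ys).sum()])  # (x, y) in pixel coords

def sample_joint_feature(feat_map, xy):
    """Nearest-neighbor sampling of the per-joint feature vector at a 2D location."""
    c, h, w = feat_map.shape
    x = int(round(float(np.clip(xy[0], 0, w - 1))))
    y = int(round(float(np.clip(xy[1], 0, h - 1))))
    return feat_map[:, y, x]  # (c,)

rng = np.random.default_rng(0)
feat_map = rng.standard_normal((32, 8, 8))  # backbone feature map (C, H, W), hypothetical sizes
heatmaps = rng.standard_normal((4, 8, 8))   # one predicted heatmap per hand MCP joint

# Gather the MCP joint features and regress a wrist rotation from them;
# a random linear map stands in for the learned regressor here.
joint_feats = np.concatenate(
    [sample_joint_feature(feat_map, soft_argmax_2d(hm)) for hm in heatmaps]
)
W = rng.standard_normal((6, joint_feats.size)) * 0.01
wrist_rot_6d = W @ joint_feats  # 6D rotation representation of the wrist
print(wrist_rot_6d.shape)
```

The key design point mirrored here is that the wrist regressor sees only features sampled at the hand MCP joint locations, rather than a global body feature vector.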