Paper Title
Semi-Perspective Decoupled Heatmaps for 3D Robot Pose Estimation from Depth Maps
Paper Authors
Paper Abstract
Knowing the exact 3D location of workers and robots in a collaborative environment enables several real applications, such as the detection of unsafe situations or the study of mutual interactions for statistical and social purposes. In this paper, we propose a non-invasive and light-invariant framework based on depth devices and deep neural networks to estimate the 3D pose of robots from an external camera. The method can be applied to any robot without requiring hardware access to its internal states. We introduce a novel representation of the predicted pose, namely Semi-Perspective Decoupled Heatmaps (SPDH), to accurately compute 3D joint locations in world coordinates by adapting efficient deep networks designed for 2D Human Pose Estimation. The proposed approach, which takes as input a depth representation based on XYZ coordinates, can be trained on synthetic depth data and applied to real-world settings without the need for domain adaptation techniques. To this end, we present the SimBa dataset, based on both synthetic and real depth images, and use it for the experimental evaluation. Results show that the proposed approach, combining a specific depth map representation with the SPDH, overcomes the current state of the art.