Paper Title

Back to the Future: Joint Aware Temporal Deep Learning 3D Human Pose Estimation

Paper Authors

Gupta, Vikas

Paper Abstract

We propose a new deep learning network that introduces a deeper CNN channel filter and constraints as losses to reduce joint position and motion errors in 3D human pose estimation from video. Our model outperforms the previous best results from the literature on the Human3.6M benchmark in terms of mean per-joint position error, velocity error, and acceleration error, corresponding to a new state-of-the-art mean error across all protocols and motion metrics. Mean per-joint position error is reduced by 1%, velocity error by 7%, and acceleration error by 13% compared to the best results from the literature. Our contribution, which increases positional accuracy and motion smoothness in video, can be integrated with future end-to-end networks without increasing network complexity. Our model and code are available at https://vnmr.github.io/. Keywords: 3D, human, image, pose, action, detection, object, video, visual, supervised, joint, kinematic
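
For context, the sketch below illustrates the three evaluation metrics named in the abstract (mean per-joint position error, velocity error, and acceleration error), assuming predicted and ground-truth 3D joint sequences of shape (frames, joints, 3). The function names and the random toy data are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error: mean Euclidean distance
    between predicted and ground-truth 3D joint positions."""
    # pred, gt: arrays of shape (frames, joints, 3)
    return np.mean(np.linalg.norm(pred - gt, axis=-1))

def mean_velocity_error(pred, gt):
    """Velocity error: MPJPE computed on first-order temporal
    differences (frame-to-frame joint displacement)."""
    return mpjpe(np.diff(pred, n=1, axis=0), np.diff(gt, n=1, axis=0))

def mean_acceleration_error(pred, gt):
    """Acceleration error: MPJPE computed on second-order
    temporal differences."""
    return mpjpe(np.diff(pred, n=2, axis=0), np.diff(gt, n=2, axis=0))

if __name__ == "__main__":
    # Toy example with random sequences (100 frames, 17 joints, xyz).
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(100, 17, 3))
    pred = gt + rng.normal(scale=0.01, size=gt.shape)
    print("MPJPE:", mpjpe(pred, gt))
    print("Velocity error:", mean_velocity_error(pred, gt))
    print("Acceleration error:", mean_acceleration_error(pred, gt))
```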
