Paper Title

I^2R-Net: Intra- and Inter-Human Relation Network for Multi-Person Pose Estimation

Paper Authors

Yiwei Ding, Wenjin Deng, Yinglin Zheng, Pengfei Liu, Meihong Wang, Xuan Cheng, Jianmin Bao, Dong Chen, Ming Zeng

Paper Abstract

In this paper, we present the Intra- and Inter-Human Relation Network (I^2R-Net) for Multi-Person Pose Estimation. It involves two basic modules. First, the Intra-Human Relation Module operates on a single person and aims to capture intra-human dependencies. Second, the Inter-Human Relation Module considers the relations between multiple instances and focuses on capturing inter-human interactions. The Inter-Human Relation Module can be made very lightweight by reducing the resolution of the feature map, yet it learns useful relation information that significantly boosts the performance of the Intra-Human Relation Module. Even without bells and whistles, our method competes with or outperforms current competition winners. We conduct extensive experiments on the COCO, CrowdPose, and OCHuman datasets. The results demonstrate that the proposed model surpasses all state-of-the-art methods. Concretely, the proposed method achieves 77.4% AP on the CrowdPose dataset and 67.8% AP on the OCHuman dataset, outperforming existing methods by a large margin. Additionally, the ablation study and visualization analysis further demonstrate the effectiveness of our model.
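
To make the two-module design concrete, here is a minimal PyTorch sketch of the idea described in the abstract: an intra-human block applies self-attention within each person's feature tokens, while a lightweight inter-human block pools those tokens to a lower resolution and attends across all person instances. This is an illustrative sketch under stated assumptions, not the authors' implementation; the class and parameter names (IntraHumanRelation, InterHumanRelation, pool_size, the residual fusion, and the token dimensions) are all hypothetical.

```python
# Illustrative sketch of the intra-/inter-human relation idea; NOT the
# paper's actual architecture. All names and hyperparameters are assumptions.
import torch
import torch.nn as nn


class IntraHumanRelation(nn.Module):
    """Self-attention over the feature tokens of each single person."""

    def __init__(self, dim=256, heads=8, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens):
        # tokens: (num_persons, num_tokens, dim); each person is one
        # sequence, so attention stays within a single instance.
        return self.encoder(tokens)


class InterHumanRelation(nn.Module):
    """Lightweight attention across person instances on pooled features."""

    def __init__(self, dim=256, heads=4, pool_size=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(pool_size)  # reduce token resolution
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens):
        p, t, d = tokens.shape
        # Pool each person's tokens to a short, low-resolution sequence.
        pooled = self.pool(tokens.transpose(1, 2)).transpose(1, 2)  # (p, pool, d)
        # Flatten all persons into one sequence and attend across instances.
        inter = pooled.reshape(1, -1, d)
        out, _ = self.attn(inter, inter, inter)
        # Fuse a per-person summary back into every token (a design assumption).
        summary = out.reshape(p, -1, d).mean(dim=1, keepdim=True)  # (p, 1, d)
        return tokens + self.proj(summary)


class I2RNet(nn.Module):
    """Intra-human relation followed by inter-human relation, then a head."""

    def __init__(self, dim=256, num_joints=17):
        super().__init__()
        self.intra = IntraHumanRelation(dim)
        self.inter = InterHumanRelation(dim)
        self.head = nn.Linear(dim, num_joints)  # e.g. 17 COCO keypoints

    def forward(self, person_tokens):
        # person_tokens: (num_persons, num_tokens, dim) from a backbone
        x = self.intra(person_tokens)
        x = self.inter(x)
        return self.head(x)


# Toy usage: 3 detected persons, 64 feature tokens each.
net = I2RNet()
print(net(torch.randn(3, 64, 256)).shape)  # torch.Size([3, 64, 17])
```

Pooling before the cross-instance attention is what keeps the inter-human module cheap: the attended sequence length is num_persons × pool_size rather than num_persons × num_tokens, which matches the abstract's point that reducing the feature-map resolution makes this module very lightweight.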
