Paper Title

Spatial Transformer Network on Skeleton-based Gait Recognition

Authors

Zhang, Cun; Chen, Xing-Peng; Han, Guo-Qiang; Liu, Xiang-Jie

Abstract

Skeleton-based gait recognition models usually suffer from a robustness problem, as Rank-1 accuracy drops from 90% in normal walking cases to 70% in walking-with-coats cases. In this work, we propose a state-of-the-art robust skeleton-based gait recognition model called Gait-TR, which combines a spatial transformer framework with temporal convolutional networks. Gait-TR achieves substantial improvements over other skeleton-based gait models, with higher accuracy and better robustness on the well-known gait dataset CASIA-B. In particular, in walking-with-coats cases, Gait-TR reaches a 90% Rank-1 gait recognition accuracy, which is higher than the best result of silhouette-based models, even though silhouette-based models usually achieve higher accuracy than skeleton-based gait recognition models. Moreover, our experiments on CASIA-B show that the spatial transformer can extract gait features from the human skeleton better than the widely used graph convolutional network.
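The two building blocks named in the abstract can be sketched minimally: per-frame self-attention across skeleton joints (the spatial-transformer idea, replacing a fixed graph adjacency with learned joint-to-joint attention) followed by a depth-wise convolution along the time axis. The sketch below is an illustration under assumed shapes, not the authors' Gait-TR architecture; the function names, the 17-joint layout, and all weights are hypothetical.

```python
import numpy as np

def spatial_self_attention(x, wq, wk, wv):
    """Single-head self-attention across skeleton joints, per frame.

    x: (T, V, C) sequence of T frames, V joints, C channels.
    Each joint's output is a weighted mix of all joints in the same
    frame, so relations between joints are learned, not fixed by a graph.
    """
    q, k, v = x @ wq, x @ wk, x @ wv                          # (T, V, C)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(q.shape[-1])  # (T, V, V)
    scores -= scores.max(axis=-1, keepdims=True)              # stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v                                           # (T, V, C)

def temporal_conv(x, kernel):
    """Depth-wise temporal convolution along the frame axis.

    x: (T, V, C); kernel: (K, C). 'Same' zero padding keeps T frames,
    so each joint/channel is smoothed over a K-frame window.
    """
    T, V, C = x.shape
    K = kernel.shape[0]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(x)
    for t in range(T):
        # weighted sum over the K-frame window, per joint and channel
        out[t] = np.einsum('kvc,kc->vc', xp[t:t + K], kernel)
    return out

rng = np.random.default_rng(0)
T, V, C = 8, 17, 4                      # 8 frames, 17 joints, 4 channels (assumed)
x = rng.standard_normal((T, V, C))      # stand-in for pose-estimator output
wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
feat = temporal_conv(spatial_self_attention(x, wq, wk, wv),
                     rng.standard_normal((3, C)) * 0.1)
print(feat.shape)  # (8, 17, 4)
```

In a full model these two blocks would be stacked with nonlinearities and residual connections, and the final features pooled over frames and joints into a gait embedding.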
