Paper Title

CircleNet: Reciprocating Feature Adaptation for Robust Pedestrian Detection

Paper Authors

Tianliang Zhang, Zhenjun Han, Huijuan Xu, Baochang Zhang, Qixiang Ye

Paper Abstract

Pedestrian detection in the wild remains a challenging problem, especially when the scene contains significant occlusion and/or low resolution of the pedestrians to be detected. Existing methods are unable to adapt to these difficult cases while maintaining acceptable performance. In this paper, we propose a novel feature learning model, referred to as CircleNet, to achieve feature adaptation by mimicking how humans look at low-resolution and occluded objects: focusing on the object again, at a finer scale, if it cannot be identified clearly at first. CircleNet is implemented as a set of feature pyramids and uses weight-sharing path augmentation for better feature fusion. It targets reciprocating feature adaptation and iterative object detection using multiple top-down and bottom-up pathways. To take full advantage of the feature adaptation capability in CircleNet, we design an instance decomposition training strategy that focuses on detecting pedestrian instances of various resolutions and different occlusion levels in each cycle. Specifically, CircleNet implements feature ensemble with the idea of hard negative boosting in an end-to-end manner. Experiments on two pedestrian detection datasets, Caltech and CityPersons, show that CircleNet improves the performance on occluded and low-resolution pedestrians by significant margins while maintaining good performance on normal instances.
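The abstract describes the architecture only at a high level. Below is a minimal, illustrative PyTorch sketch of the reciprocating idea it outlines: a feature pyramid traversed for several cycles, with the same top-down and bottom-up fusion convolutions re-used (weight-shared) in every cycle. The module name, channel widths, fusion rule, and number of cycles are assumptions made for illustration only and are not taken from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ReciprocatingPyramid(nn.Module):
    """Illustrative sketch (not the paper's code): a pyramid refined over
    several cycles with weight-shared top-down/bottom-up fusion convs."""

    def __init__(self, in_channels=(256, 512, 1024), mid_channels=256, num_cycles=2):
        super().__init__()
        self.num_cycles = num_cycles
        # 1x1 lateral convs project backbone features to a common width once.
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, mid_channels, kernel_size=1) for c in in_channels]
        )
        # These two convs are shared across all cycles (weight-sharing path augmentation).
        self.top_down = nn.Conv2d(mid_channels, mid_channels, 3, padding=1)
        self.bottom_up = nn.Conv2d(mid_channels, mid_channels, 3, padding=1)

    def forward(self, feats):
        # feats: backbone feature maps ordered from fine (high-res) to coarse (low-res).
        p = [lat(f) for lat, f in zip(self.lateral, feats)]
        pyramids = []
        for _ in range(self.num_cycles):
            # Top-down pathway: push coarse semantics into finer levels.
            for i in range(len(p) - 2, -1, -1):
                up = F.interpolate(p[i + 1], size=p[i].shape[-2:], mode="nearest")
                p[i] = self.top_down(p[i] + up)
            # Bottom-up pathway: push fine detail back into coarser levels.
            for i in range(1, len(p)):
                down = F.interpolate(p[i - 1], size=p[i].shape[-2:], mode="nearest")
                p[i] = self.bottom_up(p[i] + down)
            # A detection head would run on the pyramid produced by each cycle.
            pyramids.append([t.clone() for t in p])
        return pyramids


# Example: three toy feature maps at strides 8/16/32 of a 512x512 input.
feats = [torch.randn(1, c, s, s) for c, s in zip((256, 512, 1024), (64, 32, 16))]
per_cycle_pyramids = ReciprocatingPyramid()(feats)
```

In this sketch, each cycle produces a full pyramid that a detection head could consume, which matches the abstract's notion of iterative detection; the per-cycle training focus (instance decomposition, hard negative boosting) is not modeled here.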
