Paper Title


Segment as Points for Efficient Online Multi-Object Tracking and Segmentation

Paper Authors

Zhenbo Xu, Wei Zhang, Xiao Tan, Wei Yang, Huan Huang, Shilei Wen, Errui Ding, Liusheng Huang

Paper Abstract


Current multi-object tracking and segmentation (MOTS) methods follow the tracking-by-detection paradigm and adopt convolutions for feature extraction. However, affected by the inherent receptive field, convolution-based feature extraction inevitably mixes up foreground and background features, resulting in ambiguities in the subsequent instance association. In this paper, we propose a highly effective method for learning instance embeddings based on segments by converting the compact image representation into an unordered 2D point cloud representation. Our method yields a new tracking-by-points paradigm in which discriminative instance embeddings are learned from randomly selected points rather than from images. Furthermore, multiple informative data modalities are converted into point-wise representations to enrich point-wise features. The resulting online MOTS framework, named PointTrack, surpasses all state-of-the-art methods, including 3D tracking methods, by large margins (5.4% higher MOTSA and 18 times faster than MOTSFusion) at near real-time speed (22 FPS). Evaluations across three datasets demonstrate both the effectiveness and efficiency of our method. Moreover, based on the observation that current MOTS datasets lack crowded scenes, we build a more challenging MOTS dataset named APOLLO MOTS with higher instance density. Both APOLLO MOTS and our code are publicly available at https://github.com/detectRecog/PointTrack.
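The core idea of converting a segmented instance into an unordered 2D point cloud can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, feature choices (offset from the instance center plus RGB color), and sampling scheme are illustrative assumptions that merely mirror the abstract's description of sampling random points from a segment and attaching point-wise features from multiple modalities.

```python
import numpy as np

def mask_to_points(mask, image, num_points=256, seed=0):
    """Convert a binary instance mask into an unordered 2D point cloud.

    Randomly samples foreground pixels and attaches simple point-wise
    features: the offset from the instance center and the RGB color.
    (Hypothetical sketch, not the PointTrack reference code.)
    """
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(mask)                       # foreground pixel coordinates
    # sample with replacement only if the segment has too few pixels
    idx = rng.choice(len(xs), size=num_points, replace=len(xs) < num_points)
    xs, ys = xs[idx], ys[idx]
    center = np.array([xs.mean(), ys.mean()])       # instance center
    offsets = np.stack([xs, ys], axis=1) - center   # position relative to center
    colors = image[ys, xs] / 255.0                  # per-point RGB features
    return np.concatenate([offsets, colors], axis=1)  # shape (num_points, 5)

# toy example: a 20x20 square instance on a uniform gray image
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True
image = np.full((64, 64, 3), 128, dtype=np.uint8)
points = mask_to_points(mask, image, num_points=256)
print(points.shape)  # (256, 5)
```

Because the point set is unordered, a downstream embedding network would need to be permutation-invariant (e.g., pointwise MLPs with a symmetric pooling), which is what allows the method to avoid mixing background pixels into the instance feature.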
