Title
CORAL: Colored structural representation for bi-modal place recognition
Authors
Abstract
Place recognition is indispensable for a drift-free localization system. Due to variations in the environment, place recognition using a single modality has limitations. In this paper, we propose a bi-modal place recognition method that extracts a compound global descriptor from two modalities, vision and LiDAR. Specifically, we first build an elevation image generated from 3D points as a structural representation. Then, we derive the correspondences between 3D points and image pixels, which are further used to merge the pixel-wise visual features into the elevation map grids. In this way, we fuse the structural and visual features in a consistent bird's-eye-view frame, yielding a semantic representation named CORAL; the whole network is called CORAL-VLAD. Comparisons on the Oxford RobotCar dataset show that CORAL-VLAD achieves superior performance against other state-of-the-art methods. We also demonstrate that our network generalizes to other scenes and sensor configurations on cross-city datasets.
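The fusion step described in the abstract can be sketched as follows. This is not the authors' implementation but a minimal illustration of the idea: rasterize LiDAR points into a bird's-eye-view elevation grid, project the same points into the camera image to fetch per-pixel visual features, and store elevation and visual features in the same grid cells. The grid size, cell resolution, and the pinhole intrinsics `K` are illustrative assumptions.

```python
import numpy as np

def fuse_bev(points, pixel_feats, K, grid=64, res=0.5):
    """Fuse structural (elevation) and visual features in one BEV frame.

    points: (N, 3) LiDAR points in the camera frame (x right, y down, z forward).
    pixel_feats: (H, W, C) per-pixel visual feature map.
    K: (3, 3) pinhole camera intrinsics.
    Returns a (grid, grid, 1 + C) BEV map: channel 0 holds the maximum
    elevation per cell, channels 1..C the visual features of a projected point.
    """
    H, W, C = pixel_feats.shape
    bev = np.zeros((grid, grid, 1 + C))

    # Project 3D points onto the image plane (pinhole model).
    uvw = (K @ points.T).T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    # BEV cell indices from the ground-plane coordinates (x, z).
    gx = (points[:, 0] / res + grid / 2).astype(int)
    gz = (points[:, 2] / res).astype(int)

    # Keep points that land inside both the image and the BEV grid.
    ok = (0 <= u) & (u < W) & (0 <= v) & (v < H) & \
         (0 <= gx) & (gx < grid) & (0 <= gz) & (gz < grid) & (points[:, 2] > 0)

    for i in np.flatnonzero(ok):
        cell = bev[gz[i], gx[i]]
        cell[0] = max(cell[0], -points[i, 1])   # elevation (up = -y in camera frame)
        cell[1:] = pixel_feats[v[i], u[i]]      # visual features for that cell
    return bev
```

In the paper, the merged grid is further processed by a network to produce the CORAL representation and a NetVLAD-style global descriptor; the sketch above only shows the point-pixel correspondence and BEV fusion step.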