Paper Title
VIR-SLAM: Visual, Inertial, and Ranging SLAM for single and multi-robot systems
Paper Authors
Paper Abstract
Monocular cameras coupled with inertial measurements generally provide high-performance visual-inertial odometry. However, drift can be significant over long trajectories, especially when the environment is visually challenging. In this paper, we propose a system that leverages ultra-wideband ranging with one static anchor placed in the environment to correct the accumulated error whenever the anchor is visible. We also use this setup for collaborative SLAM: different robots use mutual ranging (when available) and the common anchor to estimate the transformation between each other, facilitating map fusion. Our system consists of two modules: a double-layer ranging, visual, and inertial odometry for single robots, and a transformation estimation module for collaborative SLAM. We test our system on public datasets by simulating an ultra-wideband sensor, as well as on real robots. Experiments show that our method can outperform state-of-the-art visual-inertial odometry by more than 20%. For visually challenging environments, our method works even when the visual-inertial odometry has significant drift. Furthermore, we can compute the collaborative SLAM transformation matrix at almost no extra computation cost.
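To make the core idea concrete, below is a minimal, illustrative sketch (not the paper's implementation) of how range measurements to one static anchor can correct accumulated odometry drift. It assumes a 2D trajectory, a known anchor position, synthetic drifting odometry, and ranges available at every pose, and it uses scipy.optimize.least_squares as a stand-in for the paper's double-layer sliding-window optimization; all names and data here are hypothetical.

# Sketch: correcting drifting odometry with UWB ranges to one static anchor.
# Assumptions (not from the paper): 2D poses, known anchor position,
# a range measurement at every pose, batch least-squares optimization.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Ground truth: a circular trajectory; anchor at the origin.
t = np.linspace(0, 2 * np.pi, 100)
truth = np.c_[5 * np.cos(t), 5 * np.sin(t)]
anchor = np.array([0.0, 0.0])

# Drifting odometry: relative motion corrupted by a small bias.
odom = np.diff(truth, axis=0) + rng.normal(0.02, 0.01, (99, 2))
# Noisy UWB ranges to the anchor.
ranges = np.linalg.norm(truth - anchor, axis=1) + rng.normal(0, 0.05, 100)

def residuals(x):
    poses = x.reshape(-1, 2)
    # Odometry factors: consecutive pose deltas should match measured motion.
    r_odom = (np.diff(poses, axis=0) - odom).ravel() / 0.03
    # Range factors: distance to the anchor should match each UWB measurement.
    r_uwb = (np.linalg.norm(poses - anchor, axis=1) - ranges) / 0.05
    return np.concatenate([r_odom, r_uwb])

# Initialize from dead-reckoned odometry (accumulates drift over time).
init = np.vstack([truth[0], truth[0] + np.cumsum(odom, axis=0)])
sol = least_squares(residuals, init.ravel())
est = sol.x.reshape(-1, 2)

print("dead-reckoning final error:", np.linalg.norm(init[-1] - truth[-1]))
print("range-corrected final error:", np.linalg.norm(est[-1] - truth[-1]))

A single anchor only constrains distance, not bearing, so the range factors alone leave the trajectory free to rotate about the anchor; the odometry factors resolve that ambiguity, which is why the two residual types are optimized jointly.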