Paper Title


Robot Perception enables Complex Navigation Behavior via Self-Supervised Learning

Authors

Marvin Chancán, Michael Milford

Abstract


Learning visuomotor control policies in robotic systems is a fundamental problem when aiming for long-term behavioral autonomy. Recent supervised-learning-based vision and motion perception systems, however, are often built separately with limited capabilities, and are restricted to a few behavioral skills such as passive visual odometry (VO) or mobile robot visual localization. Here we propose an approach to unify those successful robot perception systems for active target-driven navigation tasks via reinforcement learning (RL). Our method temporally incorporates compact motion and visual perception data - directly obtained using self-supervision from a single image sequence - to enable complex goal-oriented navigation skills. We demonstrate our approach on two real-world driving datasets, KITTI and Oxford RobotCar, using the new interactive CityLearn framework. The results show that our method can accurately generalize to extreme environmental changes, such as day-to-night cycles, with up to an 80% success rate, compared to 30% for a vision-only navigation system.
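The abstract describes temporally fusing compact visual and motion embeddings into a recurrent policy that outputs navigation actions. The paper does not specify the architecture here, so the following is only a minimal sketch of that idea, assuming hypothetical dimensions, randomly initialized weights in place of a trained network, and a vanilla RNN standing in for whatever recurrent policy the authors actually use:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): a compact visual embedding
# and a small motion (ego-motion/VO) vector are fused per timestep.
VIS_DIM, MOT_DIM, HID_DIM, N_ACTIONS = 512, 6, 256, 4

# Randomly initialized weights stand in for a policy trained with RL.
W_in = rng.standard_normal((HID_DIM, VIS_DIM + MOT_DIM)) * 0.01
W_h = rng.standard_normal((HID_DIM, HID_DIM)) * 0.01
W_out = rng.standard_normal((N_ACTIONS, HID_DIM)) * 0.01

def policy_step(visual_feat, motion_feat, hidden):
    """Fuse one timestep of visual + motion features into the recurrent
    state and return action logits (a simple-RNN stand-in for the policy)."""
    fused = np.concatenate([visual_feat, motion_feat])
    hidden = np.tanh(W_in @ fused + W_h @ hidden)
    return W_out @ hidden, hidden

hidden = np.zeros(HID_DIM)
for t in range(5):  # a short synthetic image sequence
    vis = rng.standard_normal(VIS_DIM)  # e.g. self-supervised place embedding
    mot = rng.standard_normal(MOT_DIM)  # e.g. odometry / ego-motion estimate
    logits, hidden = policy_step(vis, mot, hidden)

action = int(np.argmax(logits))  # greedy action toward the navigation goal
```

The key design point the abstract emphasizes is that both inputs are compact and obtained via self-supervision from a single image sequence, so the recurrent state carries the temporal fusion rather than any large raw-image pipeline.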
