Paper Title
Deep Reinforcement Learning based Automatic Exploration for Navigation in Unknown Environment
Paper Authors
Paper Abstract
This paper investigates the automatic exploration problem in unknown environments, which is key to applying robotic systems to some social tasks. Solutions that stack hand-crafted decision rules cannot cover the variety of environments and sensor properties. Learning-based control methods can adapt to these scenarios, but they suffer from low learning efficiency and poor transferability from simulation to reality. In this paper, we construct a general exploration framework by decomposing the exploration process into decision, planning, and mapping modules, which increases the modularity of the robotic system. Based on this framework, we propose a deep reinforcement learning based decision algorithm that uses a deep neural network to learn the exploration strategy from the partial map. The results show that the proposed algorithm has better learning efficiency and adaptability to unknown environments. In addition, we conduct experiments on a physical robot, and the results suggest that the learned policy transfers well from simulation to the real robot.
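To make the described decomposition concrete, the following is a minimal sketch of how the decision, planning, and mapping modules might interface in one exploration step. All class names, method signatures, and the placeholder policies are hypothetical illustrations of the framework's structure, not the paper's actual implementation; the trained deep network and the path planner are stubbed out.

```python
# Hypothetical sketch of the decision / planning / mapping decomposition.
# Names and internals are assumptions for illustration only.
import numpy as np


class Mapper:
    """Fuses range-sensor scans into a partial occupancy grid."""

    def __init__(self, size=(128, 128)):
        # 0.5 = unknown, 0.0 = free, 1.0 = occupied
        self.grid = np.full(size, 0.5, dtype=np.float32)

    def update(self, scan, pose):
        ...  # ray-cast the scan into the grid (omitted)
        return self.grid


class DecisionModule:
    """Stands in for the DRL-based decision algorithm: a deep network
    that maps the partial map to an exploration goal."""

    def select_goal(self, partial_map):
        # Placeholder policy: in the paper's setting this would be the
        # output of a trained deep neural network, not random scores.
        scores = np.random.rand(*partial_map.shape)
        return np.unravel_index(scores.argmax(), partial_map.shape)


class Planner:
    """Plans a collision-free path to the chosen goal (e.g. via A*)."""

    def plan(self, partial_map, pose, goal):
        return [pose, goal]  # placeholder straight-line "path"


def exploration_step(robot_pose, scan, mapper, decider, planner):
    """One iteration of the decomposed exploration loop."""
    partial_map = mapper.update(scan, robot_pose)
    goal = decider.select_goal(partial_map)
    return planner.plan(partial_map, robot_pose, goal)
```

Because the modules communicate only through the partial map, the pose, and the goal, the learned decision policy can in principle be paired with different planners and mapping back ends, which is the modularity benefit the abstract claims.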