Paper Title


Adaptive Environment Modeling Based Reinforcement Learning for Collision Avoidance in Complex Scenes

Authors

Shuaijun Wang, Rui Gao, Ruihua Han, Shengduo Chen, Chengyang Li, Qi Hao

Abstract


The major challenges of collision avoidance for robot navigation in crowded scenes lie in accurate environment modeling, fast perceptions, and trustworthy motion planning policies. This paper presents a novel adaptive environment model based collision avoidance reinforcement learning (i.e., AEMCARL) framework for an unmanned robot to achieve collision-free motions in challenging navigation scenarios. The novelty of this work is threefold: (1) developing a hierarchical network of gated-recurrent-unit (GRU) for environment modeling; (2) developing an adaptive perception mechanism with an attention module; (3) developing an adaptive reward function for the reinforcement learning (RL) framework to jointly train the environment model, perception function and motion planning policy. The proposed method is tested with the Gym-Gazebo simulator and a group of robots (Husky and Turtlebot) under various crowded scenes. Both simulation and experimental results have demonstrated the superior performance of the proposed method over baseline methods.
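The abstract's first two components, a GRU network that encodes each observed agent's history and an attention module that adaptively pools those encodings into a fixed-size perception feature, can be illustrated with a minimal numpy sketch. All dimensions, weight initializations, and function names here are illustrative assumptions, not the authors' actual AEMCARL architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (hypothetical dimensions; not the paper's exact network)."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        # Weights for the update (z), reset (r), and candidate (n) gates,
        # each acting on the concatenated [input, hidden] vector.
        self.Wz = rng.normal(0, s, (hid_dim, in_dim + hid_dim))
        self.Wr = rng.normal(0, s, (hid_dim, in_dim + hid_dim))
        self.Wn = rng.normal(0, s, (hid_dim, in_dim + hid_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                           # update gate
        r = sigmoid(self.Wr @ xh)                           # reset gate
        n = np.tanh(self.Wn @ np.concatenate([x, r * h]))   # candidate state
        return (1 - z) * n + z * h

def attention_pool(agent_states, w):
    """Softmax attention over per-agent hidden states -> one fixed-size feature."""
    scores = agent_states @ w                 # one scalar score per agent
    scores = scores - scores.max()            # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()
    return alpha @ agent_states               # attention-weighted sum

# Example: encode 3 nearby agents' 4-D observations over 5 time steps,
# then pool into a single perception feature for the planning policy.
cell = GRUCell(in_dim=4, hid_dim=8)
rng = np.random.default_rng(1)
obs = rng.normal(size=(3, 5, 4))              # (agents, timesteps, features)
states = []
for a in range(obs.shape[0]):
    h = np.zeros(8)
    for t in range(obs.shape[1]):
        h = cell.step(obs[a, t], h)
    states.append(h)
states = np.stack(states)                     # (3, 8): one state per agent
feature = attention_pool(states, rng.normal(size=8))
print(feature.shape)                          # (8,)
```

Because the attention weights sum to one over however many agents are present, the pooled feature stays the same size regardless of crowd density, which is what lets a fixed-input policy network consume observations of a variable number of agents.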
