Paper Title
Reinforcement Learning-based Joint Path and Energy Optimization of Cellular-Connected Unmanned Aerial Vehicles
Paper Authors
Paper Abstract
Unmanned Aerial Vehicles (UAVs) have attracted considerable research interest recently. Especially in the realm of the Internet of Things, UAVs with Internet connectivity are one of the main demands. Furthermore, the energy constraint, i.e., the battery limit, is a bottleneck that can restrict UAV applications. We address this energy problem by proposing a path planning method for a cellular-connected UAV that enables it to plan its path over an area much larger than its battery range by recharging at certain positions equipped with power stations (PSs). In addition to the energy constraint, there are also no-fly zones, for example due to Air-to-Air (A2A) and Air-to-Ground (A2G) interference or a lack of the necessary connectivity, which impose extra constraints on the trajectory optimization of the UAV. No-fly zones determine the infeasible areas that must be avoided. We use reinforcement learning (RL) hierarchically to extend typical short-range path planners so that they account for battery recharging, thereby addressing the problem of UAVs on long missions. The problem is simulated for a UAV flying over a large area, and the Q-learning algorithm enables the UAV to find the optimal path and recharging policy.
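To make the abstract's setup concrete, the following is a minimal, hedged sketch of tabular Q-learning on a toy grid world in which a battery-limited UAV must reach a goal, avoid no-fly cells, and may recharge at power-station cells. It is not the authors' implementation: the grid size, battery capacity, cell positions, and reward values below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's implementation): tabular Q-learning for a UAV
# on a small grid with a limited battery, hypothetical no-fly cells, and
# hypothetical power-station (recharge) cells.
import numpy as np

rng = np.random.default_rng(0)

GRID = 6                                # grid is GRID x GRID cells (assumed)
MAX_BATTERY = 8                         # battery capacity in moves (assumed)
START, GOAL = (0, 0), (5, 5)
NO_FLY = {(2, 2), (2, 3), (3, 2)}       # hypothetical no-fly cells
POWER_STATIONS = {(1, 4), (4, 1)}       # hypothetical recharge cells
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

# State = (row, col, battery); Q-table over states x actions.
Q = np.zeros((GRID, GRID, MAX_BATTERY + 1, len(ACTIONS)))

def step(state, a):
    """One environment transition; returns (next_state, reward, done)."""
    r, c, b = state
    dr, dc = ACTIONS[a]
    nr, nc = r + dr, c + dc
    # Leaving the grid or entering a no-fly zone: stay put, small penalty.
    if not (0 <= nr < GRID and 0 <= nc < GRID) or (nr, nc) in NO_FLY:
        return (r, c, b), -5.0, False
    nb = b - 1
    if nb < 0:                          # battery exhausted mid-flight
        return (nr, nc, 0), -50.0, True
    if (nr, nc) in POWER_STATIONS:      # recharge at a power station
        nb = MAX_BATTERY
    if (nr, nc) == GOAL:
        return (nr, nc, nb), 100.0, True
    return (nr, nc, nb), -1.0, False    # step cost encourages short paths

alpha, gamma, eps = 0.1, 0.95, 0.1      # learning rate, discount, exploration
for episode in range(20000):
    state, done = (*START, MAX_BATTERY), False
    while not done:
        r, c, b = state
        if rng.random() < eps:
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[r, c, b]))
        nstate, reward, done = step(state, a)
        nr, nc, nb = nstate
        target = reward + (0.0 if done else gamma * np.max(Q[nr, nc, nb]))
        Q[r, c, b, a] += alpha * (target - Q[r, c, b, a])
        state = nstate

# Greedy rollout of the learned policy.
state, path = (*START, MAX_BATTERY), [START]
for _ in range(50):
    r, c, b = state
    state, _, done = step(state, int(np.argmax(Q[r, c, b])))
    path.append(state[:2])
    if done:
        break
print(path)
```

In this toy version, including the remaining battery level in the state is what lets the learned policy trade off detours through recharge cells against the step cost, which mirrors the joint path-and-recharge decision the abstract describes, albeit without the hierarchical structure or cellular-connectivity constraints of the actual method.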