Paper Title
Energy Harvesting Aware Multi-hop Routing Policy in Distributed IoT System Based on Multi-agent Reinforcement Learning
Paper Authors
Abstract
Energy harvesting technologies offer a promising solution for sustainably powering an ever-growing number of Internet of Things (IoT) devices. However, owing to the weak and transient nature of harvested energy, IoT devices must operate intermittently, rendering conventional routing policies and energy allocation strategies impractical. To this end, this paper develops, for the first time, a distributed multi-agent reinforcement learning algorithm, termed global actor-critic policy (GAP), to jointly address routing policy and energy allocation in energy-harvesting-powered IoT systems. At the training stage, each IoT device is treated as an agent, and one universal model is trained for all agents to save computing resources. At the inference stage, the packet delivery rate can be maximized. Experimental results show that the proposed GAP algorithm achieves approximately 1.28x and 1.24x the data transmission rate of the Q-table and ESDSRAA algorithms, respectively.
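The parameter-sharing idea described in the abstract (one universal actor-critic model queried by every device agent) can be sketched as below. This is a minimal illustrative sketch only: the observation layout, network dimensions, and greedy next-hop selection are assumptions for demonstration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: each agent observes local state (e.g. its energy
# level and neighbor link qualities); actions are candidate next-hop choices.
OBS_DIM, N_ACTIONS, HIDDEN = 6, 4, 16

# One universal parameter set shared by every agent, mirroring the paper's
# "one model trained for all agents" idea.
params = {
    "W1": rng.normal(0, 0.1, (OBS_DIM, HIDDEN)),
    "actor": rng.normal(0, 0.1, (HIDDEN, N_ACTIONS)),  # policy head
    "critic": rng.normal(0, 0.1, (HIDDEN, 1)),         # value head
}

def actor_critic(obs, params):
    """Shared actor-critic forward pass: action probabilities and state value."""
    h = np.tanh(obs @ params["W1"])
    logits = h @ params["actor"]
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    value = float(h @ params["critic"])
    return probs, value

# Every agent (IoT device) queries the same shared model with its own
# local observation; no per-agent model needs to be stored or trained.
n_agents = 5
observations = rng.normal(size=(n_agents, OBS_DIM))
for i, obs in enumerate(observations):
    probs, value = actor_critic(obs, params)
    next_hop = int(np.argmax(probs))  # greedy next-hop choice at inference
    print(f"agent {i}: next_hop={next_hop}, value={value:.3f}")
```

Because every agent evaluates the same parameters, training cost and memory scale with one model rather than with the number of devices, which is the stated motivation for the universal model in GAP.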