Paper Title
Multi-Agent Deep Reinforcement Learning enabled Computation Resource Allocation in a Vehicular Cloud Network
Paper Authors
Paper Abstract
In this paper, we investigate the computational resource allocation problem in a distributed ad hoc vehicular network with no centralized infrastructure support. To support the ever-increasing computational demands in such a vehicular network, a distributed vehicular cloud network (VCN) is formed, based on which a computational resource sharing scheme through offloading among nearby vehicles is proposed. In view of the time-varying computational resources in the VCN, the statistical distribution characteristics of the computational resources are analyzed in detail. On this basis, a resource-aware combinatorial optimization objective mechanism is proposed. To alleviate the non-stationarity caused by the inherently multi-agent environment of the VCN, we adopt a centralized training and decentralized execution framework. Furthermore, we model the objective optimization problem as a Markov game and propose a DRL-based multi-agent deep deterministic policy gradient (MADDPG) algorithm to solve it. Notably, to overcome the lack of a real central control unit in the VCN, the allocation is completed on the vehicles themselves in a distributed manner. Simulation results are presented to demonstrate the effectiveness of our scheme.
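The centralized-training, decentralized-execution structure described in the abstract can be illustrated with a minimal sketch: each agent (vehicle) owns an actor that maps only its local observation to an action, while a centralized critic scores the joint observation–action of all agents during training and is discarded at execution time. All dimensions, class names, and network shapes below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative sizes (hypothetical, not from the paper).
N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 2
rng = np.random.default_rng(0)

class Actor:
    """Decentralized actor: maps one agent's LOCAL observation to its action."""
    def __init__(self):
        self.W = rng.standard_normal((ACT_DIM, OBS_DIM)) * 0.1

    def act(self, obs):
        # Deterministic policy, as in DDPG/MADDPG.
        return np.tanh(self.W @ obs)

class CentralCritic:
    """Centralized critic: scores the JOINT observations and actions of all
    agents. Used only during training; execution needs the actors alone."""
    def __init__(self):
        self.w = rng.standard_normal(N_AGENTS * (OBS_DIM + ACT_DIM)) * 0.1

    def q_value(self, all_obs, all_acts):
        joint = np.concatenate([*all_obs, *all_acts])
        return float(self.w @ joint)

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()

# Decentralized execution: each vehicle acts on its own observation only.
observations = [rng.standard_normal(OBS_DIM) for _ in range(N_AGENTS)]
actions = [actor.act(obs) for actor, obs in zip(actors, observations)]

# Centralized training signal: the critic sees the full joint state-action,
# which is what mitigates the non-stationarity each agent would otherwise see.
q = critic.q_value(observations, actions)
```

In the paper's setting there is no real central controller, so in practice this centralized critic would exist only during an offline or simulated training phase; at run time each vehicle carries just its own actor.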