Paper Title
Two-level Closed Loops for RAN Slice Resources Management Serving Flying and Ground-based Cars
Paper Authors
Paper Abstract
Flying and ground-based cars require various services such as autonomous driving, remote piloting, infotainment, and remote diagnosis. Each service requires specific Quality of Service (QoS) and network features. Therefore, network slicing can be a solution to fulfill the requirements of these various services. Some services, such as infotainment, may have similar requirements when serving flying and ground-based cars, so some slices can serve both kinds of cars. However, when network slice resource sharing is too aggressive, slices cannot meet QoS requirements: resource under-provisioning causes QoS violations, while resource over-provisioning causes resource under-utilization. To address these challenges, we propose two closed loops for managing RAN slice resources for cars. First, we present an auction mechanism for allocating Resource Blocks (RBs) to the tenants who use slices to provide services to the cars. Second, we design one closed loop that maps the slices and services of tenants to virtual Open Distributed Units (vO-DUs) and assigns RBs to vO-DUs for management purposes. Third, we design another closed loop for intra-slice RB scheduling to serve the cars. Fourth, we present a reward function that interconnects these two closed loops to satisfy the time-varying demands of cars at each slice while meeting QoS requirements in terms of delay. Finally, we design a distributed deep reinforcement learning approach to maximize the formulated reward function. Simulation results show that our approach satisfies more than 90% of vO-DU resource constraints and network slice requirements.
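The abstract does not spell out the reward function, so the following Python sketch is only an illustration of how a single reward could interconnect the vO-DU RB-assignment loop and the intra-slice scheduling loop by combining a vO-DU resource-constraint term with a per-slice delay-QoS term. All function names, weights, and the specific functional form are assumptions, not the paper's formulation.

```python
import numpy as np

# Hypothetical sketch of a reward shared by the two closed loops described in
# the abstract: one loop assigns RBs to vO-DUs, the other schedules RBs inside
# each slice. The weights and indicator-style terms below are assumptions.

def reward(rb_assigned_to_vodus, vodu_capacity,
           slice_delays, delay_requirements,
           w_resource=1.0, w_qos=1.0):
    """Return a scalar reward for one decision epoch.

    rb_assigned_to_vodus : RBs given to each vO-DU (outcome of loop 1)
    vodu_capacity        : RB capacity of each vO-DU
    slice_delays         : observed per-slice delay (outcome of loop 2)
    delay_requirements   : per-slice delay budget from the QoS requirement
    """
    rb = np.asarray(rb_assigned_to_vodus, dtype=float)
    cap = np.asarray(vodu_capacity, dtype=float)
    d = np.asarray(slice_delays, dtype=float)
    d_req = np.asarray(delay_requirements, dtype=float)

    # Fraction of vO-DUs whose RB assignment stays within capacity.
    resource_ok = np.mean(rb <= cap)
    # Fraction of slices whose measured delay meets the QoS budget.
    qos_ok = np.mean(d <= d_req)

    # Because both loops are rewarded jointly, the RB-to-vO-DU assignment and
    # the intra-slice scheduling are pushed toward jointly feasible decisions.
    return w_resource * resource_ok + w_qos * qos_ok


# Example epoch with 3 vO-DUs and 4 slices (illustrative numbers only).
r = reward(rb_assigned_to_vodus=[40, 55, 30],
           vodu_capacity=[50, 60, 30],
           slice_delays=[8.0, 12.5, 4.0, 9.9],
           delay_requirements=[10, 10, 5, 10])
print(f"epoch reward: {r:.3f}")
```

In such a shaping, a distributed deep reinforcement learning agent maximizing this reward would be driven toward decisions that keep vO-DU assignments within capacity while meeting per-slice delay budgets, which matches the objective stated in the abstract; the concrete learning architecture is left to the paper itself.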