Paper Title

Graphon Mean-Field Control for Cooperative Multi-Agent Reinforcement Learning

Authors

Yuanquan Hu, Xiaoli Wei, Junji Yan, Hengxi Zhang

Abstract

The marriage between mean-field theory and reinforcement learning has shown a great capacity to solve large-scale control problems with homogeneous agents. To break the homogeneity restriction of mean-field theory, a recent interest is to introduce graphon theory to the mean-field paradigm. In this paper, we propose a graphon mean-field control (GMFC) framework to approximate cooperative multi-agent reinforcement learning (MARL) with nonuniform interactions and show that the approximation order is $\mathcal{O}(\frac{1}{\sqrt{N}})$, with $N$ the number of agents. By discretizing the graphon index of GMFC, we further introduce a smaller class of GMFC called block GMFC, which is shown to well approximate cooperative MARL. Our empirical studies on several examples demonstrate that our GMFC approach is comparable with state-of-the-art MARL algorithms while enjoying better scalability.
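The two ingredients named in the abstract, nonuniform interactions weighted by a graphon and the block-graphon discretization behind block GMFC, can be illustrated with a minimal NumPy sketch. The min graphon $W(x,y)=\min(x,y)$, the 3-state space, and all variable names below are illustrative assumptions, not constructs taken from the paper; the sketch only shows the generic idea of a graphon-weighted empirical state distribution and its piecewise-constant (block) approximation.

```python
import numpy as np

N, K, S = 1000, 10, 3          # agents, blocks, illustrative finite state space
rng = np.random.default_rng(0)

# Embed agent i at index alpha_i in [0, 1]; sample arbitrary states in {0,1,2}.
alphas = (np.arange(N) + 0.5) / N
states = rng.integers(0, S, size=N)

# Illustrative min graphon W(x, y) = min(x, y) giving nonuniform interactions.
Wmat = np.minimum(alphas[:, None], alphas[None, :])      # (N, N) weights

# Graphon-weighted empirical state distribution seen by agent i:
#   mu_i(s) = (1/N) * sum_j W(alpha_i, alpha_j) * 1{state_j = s}
onehot = np.eye(S)[states]                               # (N, S)
neighborhood_measures = Wmat @ onehot / N                # (N, S)

# Block-graphon approximation: partition [0, 1] into K uniform intervals and
# average W over each block pair -- the discretization behind block GMFC.
blocks = np.minimum((alphas * K).astype(int), K - 1)
Wblock = np.zeros((K, K))
for a in range(K):
    for b in range(K):
        Wblock[a, b] = Wmat[np.ix_(blocks == a, blocks == b)].mean()
```

In this sketch each agent responds to its own graphon-weighted neighborhood measure rather than to a single shared empirical distribution, which is exactly what breaks the homogeneity of classical mean-field control; the `Wblock` matrix replaces the continuum of graphon indices with $K$ agent classes.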
