Paper Title

Distributed Statistical Min-Max Learning in the Presence of Byzantine Agents

Authors

Arman Adibi, Aritra Mitra, George J. Pappas, Hamed Hassani

Abstract

Recent years have witnessed a growing interest in the topic of min-max optimization, owing to its relevance in the context of generative adversarial networks (GANs), robust control and optimization, and reinforcement learning. Motivated by this line of work, we consider a multi-agent min-max learning problem, and focus on the emerging challenge of contending with worst-case Byzantine adversarial agents in such a setup. By drawing on recent results from robust statistics, we design a robust distributed variant of the extra-gradient algorithm, a popular algorithmic approach for min-max optimization. Our main contribution is to provide a crisp analysis of the proposed robust extra-gradient algorithm for smooth convex-concave and smooth strongly convex-strongly concave functions. Specifically, we establish statistical rates of convergence to approximate saddle points. Our rates are near-optimal, and reveal both the effect of adversarial corruption and the benefit of collaboration among the non-faulty agents. Notably, this is the first paper to provide formal theoretical guarantees for large-scale distributed min-max learning in the presence of adversarial agents.
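The following is an illustrative sketch of the general idea described in the abstract, not the paper's exact algorithm: a server runs the extra-gradient method on a toy bilinear saddle-point problem f(x, y) = x·y, aggregating gradients reported by agents with a coordinate-wise median as a stand-in robust aggregator (the agent counts, step size, and Byzantine behavior below are all assumptions for the demo).

```python
import numpy as np

rng = np.random.default_rng(0)

def agent_grads(x, y, n_agents=10, n_byz=2, noise=0.1):
    """Simulated gradient reports for f(x, y) = x * y.

    Honest agents return noisy stochastic gradients; the first n_byz
    agents are Byzantine and report arbitrary adversarial values.
    """
    gx = y + noise * rng.standard_normal(n_agents)  # honest d/dx f = y
    gy = x + noise * rng.standard_normal(n_agents)  # honest d/dy f = x
    gx[:n_byz] = 100.0   # Byzantine agents send garbage
    gy[:n_byz] = -100.0
    return gx, gy

def robust_extra_gradient(x0=5.0, y0=-3.0, eta=0.1, steps=2000):
    """Extra-gradient with median aggregation of agents' gradients."""
    x, y = x0, y0
    for _ in range(steps):
        # Extrapolation (look-ahead) step using robustly aggregated gradients.
        gx, gy = agent_grads(x, y)
        xm = x - eta * np.median(gx)
        ym = y + eta * np.median(gy)
        # Update step using gradients evaluated at the look-ahead point.
        gx2, gy2 = agent_grads(xm, ym)
        x = x - eta * np.median(gx2)
        y = y + eta * np.median(gy2)
    return x, y

x, y = robust_extra_gradient()
print(abs(x), abs(y))  # both should be small: the saddle point is at the origin
```

On this bilinear game, plain simultaneous gradient descent-ascent would spiral outward, while the look-ahead step of extra-gradient makes the iterates contract toward the saddle point; the median keeps the two Byzantine reports from dragging the update away, leaving only a noise floor set by the honest agents' gradient noise.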
