Paper Title

Byzantine-Resilient Secure Federated Learning

Authors

Jinhyun So, Basak Guler, A. Salman Avestimehr

Abstract

Secure federated learning is a privacy-preserving framework to improve machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process where, at each iteration, users update a global model using their local datasets. Each user then masks its local model via random keys, and the masked models are aggregated at a central server to compute the global model for the next iteration. As the local models are protected by random masks, the server cannot observe their true values. This presents a major challenge for the resilience of the model against adversarial (Byzantine) users, who can manipulate the global model by modifying their local models or datasets. Towards addressing this challenge, this paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning. BREA is based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously. We provide theoretical convergence and privacy guarantees and characterize the fundamental trade-offs in terms of the network size, user dropouts, and privacy protection. Our experiments demonstrate convergence in the presence of Byzantine users, and comparable accuracy to conventional federated learning benchmarks.
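The masking step described above can be illustrated with a toy sketch of pairwise additive masking: every pair of users shares a random mask, which one user adds to its model and the other subtracts, so the masks cancel in the server-side sum. This is only a minimal illustration of the idea, not the BREA protocol itself (which additionally uses stochastic quantization, verifiable secret sharing, and outlier detection); all function and variable names here are hypothetical.

```python
import random

def mask_models(models, seed=0):
    """Toy additive masking: each pair of users (i, j) shares a random
    mask vector; user i adds it and user j subtracts it, so every mask
    cancels when the server sums the masked models."""
    rng = random.Random(seed)
    n = len(models)
    dim = len(models[0])
    masked = [list(m) for m in models]  # server only ever sees these
    for i in range(n):
        for j in range(i + 1, n):
            mask = [rng.gauss(0, 1) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] += mask[k]
                masked[j][k] -= mask[k]
    return masked

# Example: 4 users, each with a 3-dimensional local model update.
models = [[1.0, 2.0, 3.0], [0.5, -1.0, 2.5], [2.0, 0.0, 1.0], [-1.0, 1.5, 0.5]]
masked = mask_models(models)

true_sum = [sum(col) for col in zip(*models)]
masked_sum = [sum(col) for col in zip(*masked)]
# The aggregate of the masked models equals the true aggregate, even
# though each individual masked model hides the user's true values.
assert all(abs(a - b) < 1e-9 for a, b in zip(true_sum, masked_sum))
```

Because each local model is shifted by random values the server cannot predict, individual updates stay hidden; this is precisely what makes outlier-based Byzantine defenses hard to apply directly, motivating BREA's verifiable outlier detection over secret-shared models.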
