Paper Title

Trading Off Privacy, Utility and Efficiency in Federated Learning

Authors

Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang

Abstract

Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information. Appropriate protection mechanisms have to be adopted to fulfill the opposing requirements in preserving privacy and maintaining high model utility. In addition, it is a mandate for a federated learning system to achieve high efficiency in order to enable large-scale model training and deployment. We propose a unified federated learning framework that reconciles horizontal and vertical federated learning. Based on this framework, we formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction, which leads us to the No-Free-Lunch (NFL) theorem for the federated learning system. NFL indicates that it is unrealistic to expect an FL algorithm to simultaneously provide excellent privacy, utility, and efficiency in certain scenarios. We then analyze the lower bounds for the privacy leakage, utility loss and efficiency reduction for several widely-adopted protection mechanisms including Randomization, Homomorphic Encryption, Secret Sharing and Compression. Our analysis could serve as a guide for selecting protection parameters to meet particular requirements.
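To make the privacy-utility trade-off concrete, the sketch below illustrates the Randomization mechanism mentioned in the abstract in its common clip-and-add-Gaussian-noise form. This is a minimal illustration under assumed conventions, not the paper's exact formulation; the function name `randomize_update` and its parameters are hypothetical. A larger `sigma` reduces privacy leakage but increases utility loss, which is the kind of tension the NFL theorem quantifies.

```python
import math
import random

def randomize_update(update, clip_norm=1.0, sigma=0.5, seed=None):
    """Protect a model update before sending it to the server.

    Hypothetical sketch of a randomization mechanism:
    1) clip the update so its L2 norm is at most `clip_norm`
       (bounding each party's influence), then
    2) add Gaussian noise scaled by `sigma * clip_norm`.
    """
    rng = random.Random(seed)
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    # More noise (larger sigma) -> less privacy leakage, more utility loss.
    return [x + rng.gauss(0.0, sigma * clip_norm) for x in clipped]

# Example: a gradient of norm 5 is clipped to norm 1, then perturbed.
protected = randomize_update([3.0, 4.0], clip_norm=1.0, sigma=0.1, seed=0)
```

Homomorphic Encryption and Secret Sharing instead trade efficiency (computation and communication cost) for privacy, while leaving utility essentially intact; Compression trades utility for efficiency.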
