Paper Title
Joint Privacy Enhancement and Quantization in Federated Learning
Paper Authors
Paper Abstract
Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices. The distributed operation of FL gives rise to challenges that are not encountered in centralized machine learning, including the need to preserve the privacy of the local datasets and the communication load due to the repeated exchange of updated models. These challenges are often tackled individually via techniques that induce some distortion on the updated models, e.g., local differential privacy (LDP) mechanisms and lossy compression. In this work, we propose a method coined joint privacy enhancement and quantization (JoPEQ), which jointly implements lossy compression and privacy enhancement in FL settings. In particular, JoPEQ utilizes vector quantization based on random lattices, a universal compression technique whose byproduct distortion is statistically equivalent to additive noise. This distortion is leveraged to enhance privacy by augmenting the model updates with dedicated multivariate privacy-preserving noise. We show that JoPEQ simultaneously quantizes data according to a required bit rate while holding a desired privacy level, without notably affecting the utility of the learned model. This is shown via analytical LDP guarantees, the derivation of distortion and convergence bounds, and numerical studies. Finally, we empirically assert that JoPEQ demolishes common attacks known to exploit privacy leakage.
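To make the mechanism concrete, below is a minimal Python sketch of the two ingredients the abstract describes: injecting privacy-preserving noise into a model update and then applying subtractive dithered quantization, whose distortion behaves like additive noise. This is an illustrative assumption-laden sketch, not the paper's actual construction: it uses the scalar integer lattice and Gaussian noise as stand-ins for the paper's multi-dimensional lattices and multivariate privacy-preserving noise, and the function name jopeq_sketch and the step / noise_std parameters are hypothetical, with no LDP calibration of the noise scale.

```python
import numpy as np

def jopeq_sketch(weights, step=0.1, noise_std=0.05, rng=None):
    """Illustrative sketch: privacy noise injection followed by
    subtractive dithered quantization on the scalar integer lattice.

    `step` (quantization step) and `noise_std` (privacy noise scale)
    are illustrative placeholders; the paper uses multivariate
    privacy-preserving noise calibrated to an LDP budget.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Privacy-preserving noise added before quantization (Gaussian
    # here as a stand-in for the paper's multivariate noise).
    noisy = weights + rng.normal(0.0, noise_std, size=weights.shape)

    # Subtractive dithered quantization: the dither is assumed shared
    # with the server (e.g., via a common seed), so it can be
    # subtracted after decoding, leaving distortion statistically
    # equivalent to additive uniform noise independent of the input.
    dither = rng.uniform(-step / 2, step / 2, size=weights.shape)
    quantized = step * np.round((noisy + dither) / step)

    # Server side: subtract the shared dither to recover the update.
    return quantized - dither
```

The design point the abstract highlights is that the quantization distortion itself acts as additive noise, so the dedicated privacy noise and the compression error can be budgeted jointly rather than stacked as two independent perturbations.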