Paper Title
Security Analysis of SplitFed Learning
Paper Authors
Paper Abstract
Split Learning (SL) and Federated Learning (FL) are two prominent distributed collaborative learning techniques that preserve data privacy by ensuring clients never share their private data with other clients or the server, and they have found extensive IoT applications in smart healthcare, smart cities, and smart industry. Prior work has extensively explored the security vulnerabilities of FL in the form of poisoning attacks, and several defenses have been proposed to mitigate the effect of these attacks. Recently, a hybrid of the two learning techniques has emerged (commonly known as SplitFed) that capitalizes on their advantages (fast training) and eliminates their intrinsic disadvantages (centralized model updates). In this paper, we perform the first-ever empirical analysis of SplitFed's robustness to strong model poisoning attacks. We observe that model updates in SplitFed have significantly smaller dimensionality than those in FL, which is known to suffer from the curse of dimensionality. We show that large models with higher dimensionality are more susceptible to privacy and security attacks, whereas clients in SplitFed hold only part of the model and thus produce lower-dimensional updates, making them more robust to existing model poisoning attacks. Our results show that the accuracy reduction due to model poisoning attacks is 5x lower for SplitFed than for FL.
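To make the dimensionality argument concrete, the following minimal sketch (not taken from the paper; the network architecture and the cut-layer position are hypothetical choices made only for illustration) compares the number of parameters a client would submit as a model update in FL against the client-side portion it would hold in SplitFed:

# Minimal sketch of the dimensionality gap between FL and SplitFed client updates.
# The model and cut point below are illustrative assumptions, not the paper's setup.
import torch.nn as nn

# Full model as a client would hold (and update) it in FL.
full_model = nn.Sequential(
    nn.Conv2d(1, 32, 3), nn.ReLU(),            # client-side layers (before the cut)
    nn.Conv2d(32, 64, 3), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(64 * 24 * 24, 128), nn.ReLU(),    # server-side layers (after the cut)
    nn.Linear(128, 10),
)

# In SplitFed, each client keeps only the layers up to the cut layer;
# the remaining layers reside on the server.
cut = 4                                          # hypothetical cut after the second conv block
client_part = full_model[:cut]

count = lambda m: sum(p.numel() for p in m.parameters())
print(f"FL client update dimensionality:       {count(full_model):,}")
print(f"SplitFed client update dimensionality: {count(client_part):,}")
# A poisoning client can perturb only the much smaller client-side update in
# SplitFed, which is the intuition the paper evaluates empirically.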