Paper Title
LINDT: Tackling Negative Federated Learning with Local Adaptation
Paper Authors
Paper Abstract
Federated Learning (FL) is a promising distributed learning paradigm that allows a number of data owners (also called clients) to collaboratively learn a shared model without disclosing each client's data. However, FL may fail to proceed properly, entering a state that we call negative federated learning (NFL). This paper addresses the problem of negative federated learning. We formulate a rigorous definition of NFL and analyze its essential cause. We propose a novel framework called LINDT for tackling NFL at run time. The framework can potentially work with any neural-network-based FL system for NFL detection and recovery. Specifically, we introduce a metric for detecting NFL from the server. For NFL recovery, the framework adapts the federated model to each client's local data by learning a Layer-wise Intertwined Dual-model. Experimental results show that the proposed approach can significantly improve the performance of FL on local data in various scenarios of NFL.
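To make the recovery idea more concrete, the following is a minimal, hypothetical PyTorch sketch of one way a layer-wise intertwined dual model could combine a frozen federated model with a trainable local model during local adaptation. The class name LayerwiseIntertwinedDualModel, the per-layer sigmoid gates, and the example layer shapes are illustrative assumptions and are not taken from the paper.

import torch
import torch.nn as nn

class LayerwiseIntertwinedDualModel(nn.Module):
    """Hypothetical sketch of a layer-wise intertwined dual model.

    Assumptions (not from the paper): the federated model's layers are frozen,
    a parallel local model with matching layer shapes is trained on local data,
    and each layer's two outputs are mixed by a learned per-layer gate.
    """

    def __init__(self, federated_layers, local_layers):
        super().__init__()
        assert len(federated_layers) == len(local_layers)
        self.federated_layers = nn.ModuleList(federated_layers)
        self.local_layers = nn.ModuleList(local_layers)
        # One learnable mixing coefficient per layer (illustrative choice).
        self.gates = nn.Parameter(torch.zeros(len(local_layers)))
        # Keep the federated branch fixed during local adaptation.
        for p in self.federated_layers.parameters():
            p.requires_grad = False

    def forward(self, x):
        hidden = x
        for fed_layer, loc_layer, gate in zip(
            self.federated_layers, self.local_layers, self.gates
        ):
            alpha = torch.sigmoid(gate)  # mixing weight in (0, 1)
            fed_out = fed_layer(hidden)
            loc_out = loc_layer(hidden)
            # Intertwine the two branches before feeding the next layer.
            hidden = alpha * loc_out + (1 - alpha) * fed_out
        return hidden

# Example usage: adapt a two-layer MLP federated model on a client's local data.
fed = [nn.Sequential(nn.Linear(784, 256), nn.ReLU()), nn.Linear(256, 10)]
loc = [nn.Sequential(nn.Linear(784, 256), nn.ReLU()), nn.Linear(256, 10)]
dual = LayerwiseIntertwinedDualModel(fed, loc)
logits = dual(torch.randn(32, 784))  # shape: (32, 10)

In this sketch, only the local branch and the gates receive gradients, so local adaptation cannot overwrite the shared federated weights; how the actual LINDT framework intertwines the two models and trains the gates may differ from this simplified design.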