Paper Title
CoCoFL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization
Paper Authors
Paper Abstract
Devices participating in federated learning (FL) typically have heterogeneous communication, computation, and memory resources. However, in synchronous FL, all devices need to finish training by the same deadline dictated by the server. Our results show that training a smaller subset of the neural network (NN) at constrained devices, i.e., dropping neurons/filters as proposed by the state of the art, is inefficient, preventing these devices from making an effective contribution to the model. This causes unfairness w.r.t. the achievable accuracies of constrained devices, especially in cases with a skewed distribution of class labels across devices. We present a novel FL technique, CoCoFL, which maintains the full NN structure on all devices. To adapt to the devices' heterogeneous resources, CoCoFL freezes and quantizes selected layers, reducing communication, computation, and memory requirements, whereas other layers are still trained in full precision, enabling a high accuracy to be reached. Thereby, CoCoFL efficiently utilizes the available resources on devices and allows constrained devices to make a significant contribution to the FL system, increasing fairness among participants (accuracy parity) and significantly improving the final accuracy of the model.
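To make the core idea from the abstract concrete, the sketch below illustrates partial layer freezing on a constrained device in PyTorch: a chosen subset of layers has gradient computation disabled (and, in CoCoFL, would additionally be executed in quantized form), while the remaining layers are trained in full precision and only their parameters would be uploaded to the server. This is a minimal illustration, not the authors' implementation; the split point `num_frozen`, the model architecture, and the helper names are assumptions for the example.

```python
# Minimal sketch (assumption, not the authors' code) of partial NN freezing as
# described in the abstract: frozen layers compute no gradients and need no
# parameter upload; the remaining layers are trained in full precision.
import torch
import torch.nn as nn


def freeze_prefix(model: nn.Sequential, num_frozen: int) -> nn.Sequential:
    """Freeze the first `num_frozen` layers; the rest stay trainable in fp32.

    In CoCoFL the frozen layers would additionally be quantized to further
    reduce computation and memory; that step is omitted here for brevity.
    """
    for idx, layer in enumerate(model):
        if idx < num_frozen:
            for p in layer.parameters():
                p.requires_grad = False  # no gradients, nothing to communicate
    return model


if __name__ == "__main__":
    model = nn.Sequential(
        nn.Linear(32, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 10),
    )
    model = freeze_prefix(model, num_frozen=2)

    # Only the non-frozen parameters are optimized locally and would be sent
    # to the server, cutting computation, memory, and communication.
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=0.1)

    x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
```

A more constrained device would simply choose a larger `num_frozen`, so every participant trains the same full NN structure but at a cost matched to its resources.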