Paper Title

Federated Learning via Decentralized Dataset Distillation in Resource-Constrained Edge Environments

Paper Authors

Rui Song, Dai Liu, Dave Zhenyu Chen, Andreas Festag, Carsten Trinitis, Martin Schulz, Alois Knoll

Paper Abstract

In federated learning, all networked clients contribute to the model training cooperatively. However, as model sizes grow, even sharing partially trained models often causes severe communication bottlenecks in the underlying networks, especially when the communication is iterative. In this paper, we introduce FedD3, a federated learning framework that requires only one-shot communication by integrating dataset distillation instances. Instead of sharing model updates as in other federated learning approaches, FedD3 lets the connected clients distill their local datasets independently and then aggregates these decentralized distilled datasets (e.g., a few unrecognizable images) from the network for model training. Our experimental results show that FedD3 significantly outperforms other federated learning frameworks in terms of required communication volume, while offering the additional benefit of balancing the trade-off between accuracy and communication cost, depending on the usage scenario or target dataset. For instance, when training an AlexNet model on CIFAR-10 with 10 clients under a non-independent and identically distributed (non-IID) setting, FedD3 can either increase accuracy by over 71% with a similar communication volume, or save 98% of the communication volume while reaching the same accuracy, compared to other one-shot federated learning approaches.
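
The abstract describes a one-shot pipeline: each client distills its local dataset into a handful of synthetic samples, uploads them a single time, and the server trains the global model on the union of these distilled sets. Below is a minimal sketch of that flow, assuming a toy per-class-mean "distillation" as a stand-in for the paper's actual dataset distillation instances; the MLP, function names, and data shapes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a FedD3-style one-shot flow as described in the abstract.
# The per-class-mean "distillation" is a toy placeholder, NOT the paper's
# distillation method; all names and shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


def distill_locally(x, y, num_classes):
    """Toy stand-in for local dataset distillation: one synthetic sample per
    class (the class mean). Runs entirely on the client."""
    synth_x, synth_y = [], []
    for c in range(num_classes):
        mask = y == c
        if mask.any():
            synth_x.append(x[mask].mean(dim=0))
            synth_y.append(c)
    return torch.stack(synth_x), torch.tensor(synth_y)


def server_train(distilled_sets, in_dim, num_classes, epochs=200):
    """Aggregate the clients' distilled samples and train one global model."""
    x = torch.cat([s[0] for s in distilled_sets])
    y = torch.cat([s[1] for s in distilled_sets])
    model = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                          nn.Linear(64, num_classes))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    num_clients, num_classes, in_dim = 10, 10, 32
    # Random tensors stand in for each client's (possibly non-IID) local shard.
    clients = [(torch.randn(100, in_dim), torch.randint(0, num_classes, (100,)))
               for _ in range(num_clients)]
    # One-shot communication: each client uploads only its distilled samples.
    distilled = [distill_locally(x, y, num_classes) for x, y in clients]
    global_model = server_train(distilled, in_dim, num_classes)
    print("trained on", sum(len(s[1]) for s in distilled), "distilled samples")
```

The point of the sketch is the communication pattern rather than the distillation quality: only the small distilled sets cross the network, and they do so exactly once, instead of iterative model-update exchanges.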
