Paper Title

Wireless Quantized Federated Learning: A Joint Computation and Communication Design

Authors

Bouzinis, Pavlos S., Diamantoulakis, Panagiotis D., Karagiannidis, George K.

Abstract

Recently, federated learning (FL) has attracted widespread attention as a promising decentralized machine learning approach that provides privacy and low delay. However, the communication bottleneck still constitutes an issue that needs to be resolved for an efficient deployment of FL over wireless networks. In this paper, we aim to minimize the total convergence time of FL by quantizing the local model parameters prior to uplink transmission. More specifically, a convergence analysis of the FL algorithm with stochastic quantization is first presented, which reveals the impact of the quantization error on the convergence rate. Following that, we jointly optimize the computation and communication resources and the number of quantization bits, in order to guarantee a minimized convergence time across all global rounds, subject to energy and quantization-error requirements that stem from the convergence analysis. The impact of the quantization error on the convergence time is evaluated, and the trade-off between model accuracy and timely execution is revealed. Moreover, the proposed method is shown to achieve faster convergence in comparison with baseline schemes. Finally, useful insights for the selection of the quantization error tolerance are provided.
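The stochastic quantization the abstract refers to can be illustrated with a minimal sketch. This is a hypothetical pure-Python implementation, not the paper's exact scheme: the function name, the uniform grid over [min, max], and the unbiased probabilistic rounding are all assumptions made for illustration.

```python
import random

def stochastic_quantize(params, bits, rng=None):
    """Stochastically quantize a list of floats onto a 2**bits-point grid.

    Each value is mapped to a uniform grid over [min, max] and rounded
    up or down at random, with the probability of rounding up equal to
    the fractional distance to the upper grid point, so the quantizer
    is unbiased in expectation: E[Q(w)] = w.  Hypothetical sketch only.
    """
    rng = rng or random.Random(0)
    lo, hi = min(params), max(params)
    if hi == lo:                       # all values equal: nothing to quantize
        return list(params)
    levels = (1 << bits) - 1           # number of grid intervals
    out = []
    for w in params:
        x = (w - lo) / (hi - lo) * levels  # continuous grid coordinate
        base = int(x)                      # lower grid point
        p = x - base                       # probability of rounding up
        q = base + (1 if rng.random() < p else 0)
        out.append(lo + q * (hi - lo) / levels)
    return out
```

Under such a scheme, fewer bits shrink the uplink payload per round but enlarge the quantization error, which is exactly the accuracy-versus-delay trade-off the abstract describes.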
