Paper Title

Deep unfolding of the weighted MMSE beamforming algorithm

Paper Authors

Lissy Pellaco, Mats Bengtsson, Joakim Jaldén

Paper Abstract

Downlink beamforming is a key technology for cellular networks. However, computing the transmit beamformer that maximizes the weighted sum rate subject to a power constraint is an NP-hard problem. As a result, iterative algorithms that converge to a local optimum are used in practice. Among them, the weighted minimum mean square error (WMMSE) algorithm has gained popularity, but its computational complexity and consequent latency have motivated the need for lower-complexity approximations at the expense of performance. Motivated by the recent success of deep unfolding in trading off complexity against performance, we propose the novel application of deep unfolding to the WMMSE algorithm for a MISO downlink channel. The main idea consists of mapping a fixed number of iterations of the WMMSE algorithm into trainable neural network layers whose architecture reflects the structure of the original algorithm. Compared with traditional end-to-end learning, deep unfolding naturally incorporates expert knowledge, with the benefits of immediate and well-grounded architecture selection, fewer trainable parameters, and better explainability. However, the formulation of the WMMSE algorithm described in Shi et al. is not amenable to unfolding because of the matrix inversion, eigendecomposition, and bisection search performed at each iteration. Therefore, we present an alternative formulation that circumvents these operations by resorting to projected gradient descent. By means of simulations, we show that, in most settings, the unfolded WMMSE outperforms or matches the WMMSE run for a fixed number of iterations, with the advantage of a lower computational load.
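
The abstract outlines two mechanisms: a fixed number of algorithm iterations mapped into trainable network layers, and projected gradient descent used in place of the per-iteration matrix inversion, eigendecomposition, and bisection search. The sketch below is only a minimal NumPy illustration of that structure, not the authors' implementation: the names project_power and unfolded_pgd, the step-size list, and the toy quadratic objective are assumptions introduced here for illustration. Its point is that the projection onto the total power constraint reduces to a closed-form rescaling, which is what removes the need for a bisection search.

```python
import numpy as np

def project_power(V, p_max):
    """Project a beamformer matrix V onto the set {V : ||V||_F^2 <= p_max}.
    The projection is a closed-form rescaling, so no bisection search
    over a Lagrange multiplier is required."""
    power = np.linalg.norm(V, "fro") ** 2
    return V if power <= p_max else V * np.sqrt(p_max / power)

def unfolded_pgd(V0, grad_fn, step_sizes, p_max):
    """One unfolded pass: len(step_sizes) projected-gradient 'layers'.
    In a deep-unfolding setup each step size would be a trainable
    parameter learned offline; here they are plain floats."""
    V = V0
    for gamma in step_sizes:
        V = project_power(V - gamma * grad_fn(V), p_max)
    return V

# Toy usage with a placeholder quadratic objective ||H V - T||_F^2
# (NOT the weighted sum rate considered in the paper).
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4))
T = rng.standard_normal((4, 2))
grad = lambda V: 2.0 * H.T @ (H @ V - T)
V = unfolded_pgd(rng.standard_normal((4, 2)), grad, [0.05] * 6, p_max=10.0)
print(np.linalg.norm(V, "fro") ** 2 <= 10.0 + 1e-9)  # power budget respected
```

In the unfolded network described in the abstract, a pass of this kind would stand in for the transmit-beamformer update that otherwise requires the matrix inversion and bisection search, with the per-layer parameters learned offline while the remaining WMMSE updates keep their algorithmic form.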
