Paper Title

Learning Quantization in LDPC Decoders

Authors

Marvin Geiselhart, Ahmed Elkelesh, Jannis Clausius, Fei Liang, Wen Xu, Jing Liang, Stephan ten Brink

Abstract

Finding optimal message quantization is a key requirement for low-complexity belief propagation (BP) decoding. To this end, we propose a floating-point surrogate model that imitates quantization effects as additions of uniform noise, whose amplitudes are trainable variables. We verify that the surrogate model closely matches the behavior of a fixed-point implementation and propose a hand-crafted loss function to realize a trade-off between complexity and error-rate performance. A deep learning-based method is then applied to optimize the message bitwidths. Moreover, we show that parameter sharing both ensures implementation-friendly solutions and results in faster training convergence than independent parameters. We provide simulation results for 5G low-density parity-check (LDPC) codes and report an error-rate performance within 0.2 dB of floating-point decoding at an average message quantization bitwidth of 3.1 bits. In addition, we show that the learned bitwidths also generalize to other code rates and channels.
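The core idea of the surrogate model can be illustrated with a minimal NumPy sketch: quantizing a message with step size Δ introduces an error that is approximately uniform on [-Δ/2, Δ/2], so a differentiable stand-in for the quantizer simply adds uniform noise of that amplitude. The sketch below assumes a mid-rise uniform quantizer with a hypothetical clipping range of ±8; the function names, clipping range, and fixed bitwidth are illustrative assumptions, not the paper's implementation (in the paper, the noise amplitudes, i.e. the bitwidths, are trainable variables).

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, bitwidth, clip=8.0):
    """Fixed-point reference: uniform quantizer with saturation.
    Step size follows from the bitwidth and the (assumed) clip range."""
    step = 2 * clip / (2 ** bitwidth)
    return np.clip(np.round(x / step) * step, -clip, clip)

def surrogate(x, bitwidth, clip=8.0):
    """Floating-point surrogate: instead of rounding, add uniform noise
    with the same amplitude as the quantization error."""
    step = 2 * clip / (2 ** bitwidth)
    return x + rng.uniform(-step / 2, step / 2, size=x.shape)

# Compare the error statistics of the true quantizer and the surrogate
# on Gaussian-distributed BP messages (illustrative distribution).
x = rng.normal(0.0, 2.0, 100_000)
err_q = quantize(x, bitwidth=4) - x
err_s = surrogate(x, bitwidth=4) - x
print(round(err_q.std(), 3), round(err_s.std(), 3))
```

Both error standard deviations come out close to Δ/√12 (with Δ = 1 here), which is why the surrogate tracks the fixed-point behavior closely enough to be used as a trainable proxy during gradient-based optimization of the bitwidths.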
