Paper Title

Vau da muntanialas: Energy-efficient multi-die scalable acceleration of RNN inference

Paper Authors

Gianna Paulin, Francesco Conti, Lukas Cavigelli, Luca Benini

Paper Abstract

Recurrent neural networks such as Long Short-Term Memories (LSTMs) learn temporal dependencies by keeping an internal state, making them ideal for time-series problems such as speech recognition. However, the output-to-input feedback creates distinctive memory bandwidth and scalability challenges in designing accelerators for RNNs. We present Muntaniala, an RNN accelerator architecture for LSTM inference with a silicon-measured energy efficiency of 3.25 $TOP/s/W$ and performance of 30.53 $GOP/s$ in UMC 65 $nm$ technology. The scalable design of Muntaniala allows running large RNN models by combining multiple tiles in a systolic array. We keep all parameters stationary on every die in the array, drastically reducing the I/O communication to only loading new features and sharing partial results with other dies. To quantify the overall system power, including I/O power, we built Vau da Muntanialas, to the best of our knowledge the first demonstration of a systolic multi-chip-on-PCB array of RNN accelerators. Our multi-die prototype performs LSTM inference with 192 hidden states in 330 $μs$ with a total system power of 9.0 $mW$ at 10 $MHz$, consuming 2.95 $μJ$. Targeting the 8/16-bit quantization implemented in Muntaniala, we show a phoneme error rate (PER) drop of approximately 3% with respect to floating-point (FP) on a 3L-384NH-123NI LSTM network on the TIMIT dataset.
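The abstract compresses three ideas: the LSTM recurrence (the hidden state is fed back as an input at the next timestep, which is the source of the bandwidth challenge), a weight-stationary partitioning of the weight matrix across dies, and 8/16-bit fixed-point quantization. The NumPy sketch below models only those three ideas and is not the authors' hardware dataflow; the helpers `quantize`, `dequantize`, `tiled_matvec`, and `lstm_step`, the symmetric fixed-point format (fractional-bit choices `W_FRAC`/`A_FRAC`), and the toy tile and layer sizes are all assumptions made for illustration.

```python
# Minimal sketch, assuming a symmetric fixed-point format and a column-wise
# weight split across "tiles"; not the Muntaniala RTL or exact quantization.
import numpy as np

W_FRAC = 6   # assumed fractional bits for int8 weights
A_FRAC = 10  # assumed fractional bits for int16 activations

def quantize(x, bits, frac):
    """Symmetric fixed-point quantization to `bits` with `frac` fractional bits."""
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * (1 << frac)), lo, hi).astype(np.int32)

def dequantize(q, frac):
    return q.astype(np.float64) / (1 << frac)

def tiled_matvec(weight_tiles, x_q):
    """Each tile multiplies its stationary weight slice with the shared quantized
    input vector; the integer partial sums are accumulated across tiles, the way
    partial results would be passed along a systolic array."""
    acc, col = 0, 0
    for w_q in weight_tiles:      # int8 values held in int32 containers (no overflow)
        n = w_q.shape[1]
        acc = acc + w_q @ x_q[col:col + n]
        col += n
    return acc

def lstm_step(weight_tiles, b, h_prev, c_prev, x):
    """One LSTM timestep on quantized data; textbook gate order (i, f, g, o)."""
    nh = h_prev.size
    xh = np.concatenate([x, h_prev])           # output-to-input feedback
    xh_q = quantize(xh, bits=16, frac=A_FRAC)
    z = dequantize(tiled_matvec(weight_tiles, xh_q), W_FRAC + A_FRAC) + b
    i, f, g, o = z[:nh], z[nh:2*nh], z[2*nh:3*nh], z[3*nh:]
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c_prev + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

# Toy configuration: 2 tiles, 8 input features, 4 hidden units.
rng = np.random.default_rng(0)
ni, nh, n_tiles = 8, 4, 2
W = rng.standard_normal((4 * nh, ni + nh)) * 0.1
b = rng.standard_normal(4 * nh) * 0.1
# Split the weight matrix column-wise across tiles and quantize to int8 once;
# the quantized slices then stay "stationary" for the whole sequence.
cols = np.array_split(np.arange(ni + nh), n_tiles)
weight_tiles = [quantize(W[:, c], bits=8, frac=W_FRAC) for c in cols]

h, c = np.zeros(nh), np.zeros(nh)
for t in range(5):                              # short input sequence
    x = rng.standard_normal(ni) * 0.1
    h, c = lstm_step(weight_tiles, b, h, c, x)
print("final hidden state:", np.round(h, 4))
```

The design point the abstract emphasizes follows from this structure: once each tile's weight slice is resident, the only data crossing die boundaries per timestep are the new input features and the integer partial sums, which is what keeps I/O traffic and power low. The reported energy is also consistent with the power-time product, 9.0 $mW$ × 330 $μs$ ≈ 2.97 $μJ$.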
