Title

Surrogate gradients for analog neuromorphic computing

Authors

Benjamin Cramer, Sebastian Billaudelle, Simeon Kanya, Aron Leibfried, Andreas Grübl, Vitali Karasenko, Christian Pehle, Korbinian Schreiber, Yannik Stradmann, Johannes Weis, Johannes Schemmel, Friedemann Zenke

Abstract

To rapidly process temporal information at a low metabolic cost, biological neurons integrate inputs as an analog sum but communicate with spikes, binary events in time. Analog neuromorphic hardware uses the same principles to emulate spiking neural networks with exceptional energy-efficiency. However, instantiating high-performing spiking networks on such hardware remains a significant challenge due to device mismatch and the lack of efficient training algorithms. Here, we introduce a general in-the-loop learning framework based on surrogate gradients that resolves these issues. Using the BrainScaleS-2 neuromorphic system, we show that learning self-corrects for device mismatch resulting in competitive spiking network performance on both vision and speech benchmarks. Our networks display sparse spiking activity with, on average, far less than one spike per hidden neuron and input, perform inference at rates of up to 85 k frames/second, and consume less than 200 mW. In summary, our work sets several new benchmarks for low-energy spiking network processing on analog neuromorphic hardware and paves the way for future on-chip learning algorithms.
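The core trick behind surrogate gradients is to keep the binary spike in the forward pass but swap in a smooth pseudo-derivative for the step function's ill-defined gradient during backpropagation. The sketch below illustrates this idea with the SuperSpike surrogate (Zenke and Ganguli, 2018; Zenke is a co-author here) as a PyTorch autograd function; the class name, threshold convention, and steepness `beta` are illustrative choices, not the paper's BrainScaleS-2 in-the-loop implementation.

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike nonlinearity with a smooth surrogate gradient.

    Forward: emit a binary spike when the (threshold-subtracted)
    membrane potential v crosses zero.
    Backward: replace the step function's derivative with
    1 / (beta * |v| + 1)**2, the SuperSpike surrogate.
    """
    beta = 10.0  # surrogate steepness; a common default, not taken from the paper

    @staticmethod
    def forward(ctx, v):
        # v: membrane potential minus firing threshold, any shape
        ctx.save_for_backward(v)
        return (v > 0).float()  # binary spikes, as on the analog hardware

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SuperSpike.beta * v.abs() + 1.0) ** 2
        return grad_output * surrogate

# Usage: spikes are binary in the forward pass, yet gradients still flow.
v = torch.randn(5, requires_grad=True)
spikes = SuperSpike.apply(v)
spikes.sum().backward()
print(spikes, v.grad)
```

In the paper's in-the-loop setting, the forward pass (spike generation) runs on the analog BrainScaleS-2 substrate, so the gradient computed this way is taken with respect to the observed hardware activity; that is what lets learning self-correct for device mismatch.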
