Paper Title

Brain inspired neuronal silencing mechanism to enable reliable sequence identification

Authors

Shiri Hodassman, Yuval Meir, Karin Kisos, Itamar Ben-Noam, Yael Tugendhaft, Amir Goldental, Roni Vardi, Ido Kanter

Abstract

Real-time sequence identification is a core use-case of artificial neural networks (ANNs), ranging from recognizing temporal events to identifying verification codes. Existing methods apply recurrent neural networks, which suffer from training difficulties; however, performing this function without feedback loops remains a challenge. Here, we present an experimental neuronal long-term plasticity mechanism for high-precision feedforward sequence identification networks (ID-nets) without feedback loops, wherein input objects have a given order and timing. This mechanism temporarily silences neurons following their recent spiking activity. Therefore, transitory objects act on different dynamically created feedforward sub-networks. ID-nets are demonstrated to reliably identify 10 handwritten digit sequences, and are generalized to deep convolutional ANNs with continuous activation nodes trained on image sequences. Counterintuitively, their classification performance, even with a limited number of training examples, is high for sequences but low for individual objects. ID-nets are also implemented for writer-dependent recognition, and suggested as a cryptographic tool for encrypted authentication. The presented mechanism opens new horizons for advanced ANN algorithms.
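
The silencing rule described in the abstract can be illustrated with a small numerical sketch. The Python snippet below is a minimal, hypothetical illustration rather than the authors' ID-net implementation: the layer sizes, firing threshold, silencing duration, and random weights are all assumptions chosen only to show the core idea, namely that hidden units which have recently fired are excluded for a few subsequent presentations, so each object in a timed sequence is processed by a different, dynamically created feedforward sub-network.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 784, 256, 10   # hypothetical layer sizes (e.g., digit images, 10 classes)
SILENCE_STEPS = 3                   # assumed silencing duration, counted in presented objects

W1 = rng.normal(0, 0.1, (N_HID, N_IN))   # illustrative (untrained) weights
W2 = rng.normal(0, 0.1, (N_OUT, N_HID))

def run_sequence(objects, threshold=1.0):
    """Feed a timed sequence of input objects through one hidden layer whose
    units are silenced for SILENCE_STEPS presentations after they fire."""
    silenced_for = np.zeros(N_HID, dtype=int)    # remaining silent steps per hidden unit
    outputs = []
    for x in objects:                            # objects arrive with a given order and timing
        active_mask = silenced_for == 0          # only non-silenced units can participate
        h = W1 @ x
        fired = active_mask & (h > threshold)    # units that spike on this presentation
        h_eff = np.where(fired, h, 0.0)          # silenced/sub-threshold units contribute nothing
        outputs.append(W2 @ h_eff)               # readout sees a sequence-dependent sub-network
        # update silencing: decay existing timers, then start timers for units that just fired
        silenced_for = np.maximum(silenced_for - 1, 0)
        silenced_for[fired] = SILENCE_STEPS
    return outputs

# Example: a sequence of 10 random "images"; the readout at each step depends on
# which sub-network the earlier objects in the sequence left available.
seq = [rng.random(N_IN) for _ in range(10)]
outs = run_sequence(seq)
print(len(outs), outs[0].shape)
```

Because the set of available units at each step depends on what fired before, the same object presented at a different position in the sequence activates a different sub-network, which is what makes the readout sensitive to order and timing rather than to individual objects alone.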
