Paper Title
Align, then memorise: the dynamics of learning with feedback alignment
Paper Authors
Paper Abstract
Direct Feedback Alignment (DFA) is emerging as an efficient and biologically plausible alternative to the ubiquitous backpropagation algorithm for training deep neural networks. Despite relying on random feedback weights for the backward pass, DFA successfully trains state-of-the-art models such as Transformers. On the other hand, it notoriously fails to train convolutional networks. An understanding of the inner workings of DFA to explain these diverging results remains elusive. Here, we propose a theory for the success of DFA. We first show that learning in shallow networks proceeds in two steps: an alignment phase, where the model adapts its weights to align the approximate gradient with the true gradient of the loss function, is followed by a memorisation phase, where the model focuses on fitting the data. This two-step process has a degeneracy breaking effect: out of all the low-loss solutions in the landscape, a network trained with DFA naturally converges to the solution which maximises gradient alignment. We also identify a key quantity underlying alignment in deep linear networks: the conditioning of the alignment matrices. The latter enables a detailed understanding of the impact of data structure on alignment, and suggests a simple explanation for the well-known failure of DFA to train convolutional neural networks. Numerical experiments on MNIST and CIFAR10 clearly demonstrate degeneracy breaking in deep non-linear networks and show that the align-then-memorise process occurs sequentially from the bottom layers of the network to the top.
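To make the mechanism concrete, below is a minimal, illustrative sketch of DFA on a two-layer network with a squared loss and tanh hidden units. The architecture, toy teacher data, and hyper-parameters are assumptions for demonstration, not the paper's experimental setup; the printed cosine similarity is one simple proxy for the gradient alignment the abstract refers to.

```python
# Minimal sketch of Direct Feedback Alignment (DFA) on a two-layer network.
# Assumptions: squared loss, tanh hidden units, toy teacher data (not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: D-dimensional Gaussian inputs, scalar targets from a random teacher.
N, D, H = 1000, 20, 100
X = rng.standard_normal((N, D))
w_teacher = rng.standard_normal(D) / np.sqrt(D)
y = np.tanh(X @ w_teacher)

# Forward weights (trained) and a fixed random feedback matrix (never trained).
W1 = rng.standard_normal((D, H)) / np.sqrt(D)
W2 = rng.standard_normal((H, 1)) / np.sqrt(H)
B1 = rng.standard_normal((1, H))  # random feedback weights used in the backward pass

lr = 0.05
for step in range(2001):
    # Forward pass.
    a1 = X @ W1                   # hidden pre-activations, shape (N, H)
    h1 = np.tanh(a1)              # hidden activations
    y_hat = h1 @ W2               # network output, shape (N, 1)
    e = y_hat - y[:, None]        # output error

    # Backward pass: DFA projects the output error through the fixed random
    # matrix B1 instead of W2.T, which backpropagation would use.
    delta1 = (e @ B1) * (1 - h1 ** 2)

    # Gradient-descent updates.
    W2 -= lr * h1.T @ e / N
    W1 -= lr * X.T @ delta1 / N

    # Track alignment between the DFA error signal (e @ B1) and the
    # backpropagation error signal (e @ W2.T) at the hidden layer.
    if step % 500 == 0:
        bp, dfa = e @ W2.T, e @ B1
        cos = np.sum(bp * dfa) / (np.linalg.norm(bp) * np.linalg.norm(dfa))
        print(f"step {step:4d}  loss {np.mean(e**2):.4f}  alignment {cos:.3f}")
```

In this toy run the alignment measure typically grows before the loss drops substantially, which is the "align, then memorise" pattern described in the abstract.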