Paper Title


Fiedler Regularization: Learning Neural Networks with Graph Sparsity

Paper Authors

Edric Tam, David Dunson

Abstract


We introduce a novel regularization approach for deep learning that incorporates and respects the underlying graphical structure of the neural network. Existing regularization methods often focus on dropping/penalizing weights in a global manner that ignores the connectivity structure of the neural network. We propose to use the Fiedler value of the neural network's underlying graph as a tool for regularization. We provide theoretical support for this approach via spectral graph theory. We list several useful properties of the Fiedler value that make it suitable for regularization. We provide an approximate, variational approach for fast computation during practical training of neural networks, and we provide bounds on such approximations. We also give an alternative but equivalent formulation of this framework in the form of a structurally weighted L1 penalty, thus linking our approach to sparsity induction. We perform experiments on datasets comparing Fiedler regularization with traditional regularization methods such as dropout and weight decay. The results demonstrate the efficacy of Fiedler regularization.
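To make the central quantity concrete: the Fiedler value is the second-smallest eigenvalue of a graph's Laplacian L = D - A, where A is the (weighted) adjacency matrix and D the diagonal degree matrix. The sketch below, which is an illustrative reconstruction and not the authors' implementation, computes the Fiedler value of a tiny network's underlying graph from the magnitudes of its weights, the kind of term one could add to a training loss as a penalty.

```python
import numpy as np

def fiedler_value(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A,
    given a symmetric nonnegative adjacency matrix A."""
    D = np.diag(A.sum(axis=1))     # degree matrix
    L = D - A                      # graph Laplacian
    eigs = np.linalg.eigvalsh(L)   # eigenvalues in ascending order
    return eigs[1]                 # eigs[0] is ~0 for a connected graph

# Hypothetical example: a 2-input, 2-hidden-unit layer viewed as a
# bipartite graph on 4 nodes, with edge weights = |network weights|.
w = np.array([[0.5, 1.0],
              [2.0, 0.1]])
A = np.zeros((4, 4))
A[:2, 2:] = np.abs(w)
A[2:, :2] = np.abs(w).T
lam2 = fiedler_value(A)  # candidate penalty term: loss + c * lam2
```

Since the Fiedler value measures how well-connected the graph is (it is zero exactly when the graph is disconnected), penalizing it pushes the network toward sparser, more disconnected weight structure, consistent with the weighted-L1 view described in the abstract.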
