Title

Deep Autoencoders: From Understanding to Generalization Guarantees

Authors

Romain Cosentino, Randall Balestriero, Richard Baraniuk, Behnaam Aazhang

Abstract

A big mystery in deep learning continues to be the ability of methods to generalize when the number of model parameters is larger than the number of training examples. In this work, we take a step towards a better understanding of the underlying phenomena of Deep Autoencoders (AEs), a mainstream deep learning solution for learning compressed, interpretable, and structured data representations. In particular, we interpret how AEs approximate the data manifold by exploiting their continuous piecewise affine structure. Our reformulation of AEs provides new insights into their mapping and reconstruction guarantees, as well as an interpretation of commonly used regularization techniques. We leverage these findings to derive two new regularizations that enable AEs to capture the inherent symmetry in the data. Our regularizations leverage recent advances in transformation group learning to enable AEs to better approximate the data manifold without explicitly defining the group underlying the manifold. Under the assumption that the symmetry of the data can be explained by a Lie group, we prove that the regularizations ensure the generalization of the corresponding AEs. A range of experimental evaluations demonstrates that our methods outperform other state-of-the-art regularization techniques.
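The continuous piecewise affine structure mentioned in the abstract can be made concrete for a ReLU autoencoder: on each region where the ReLU activation pattern is fixed, the whole encoder–decoder map collapses to a single affine function. The sketch below is only an illustration of that general property (with hypothetical, untrained random weights), not the authors' reformulation or regularizations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny ReLU autoencoder R^4 -> R^2 -> R^4 with random, untrained weights;
# the point is the functional form, not reconstruction quality.
W1, b1 = rng.normal(size=(2, 4)), rng.normal(size=2)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=4)

def ae(x):
    z = np.maximum(W1 @ x + b1, 0.0)  # ReLU encoder
    return W2 @ z + b2                # linear decoder

def region_affine(x):
    """Affine map (A, c) the AE applies on x's ReLU region.

    With D = diag(1[W1 x + b1 > 0]), the output is
    W2 D (W1 x + b1) + b2 = (W2 D W1) x + (W2 D b1 + b2).
    """
    D = np.diag(((W1 @ x + b1) > 0).astype(float))
    return W2 @ D @ W1, W2 @ D @ b1 + b2

x = rng.normal(size=4)
A, c = region_affine(x)
# The AE agrees with its region's affine map at x (and on the whole region).
assert np.allclose(ae(x), A @ x + c)
```

Under this view, the input space is partitioned into regions, and training shapes a per-region affine approximation of the data manifold, which is the structure the paper's analysis and regularizations build on.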
