Title
Parametric machines: a fresh approach to architecture search
Authors
Abstract
Using tools from topology and functional analysis, we provide a framework where artificial neural networks, and their architectures, can be formally described. We define the notion of machine in a general topological context and show how simple machines can be combined into more complex ones. We explore finite- and infinite-depth machines, which generalize neural networks and neural ordinary differential equations. Borrowing ideas from functional analysis and kernel methods, we build complete, normed, infinite-dimensional spaces of machines, and we discuss how to find optimal architectures and parameters -- within those spaces -- to solve a given computational problem. In our numerical experiments, these kernel-inspired networks can outperform classical neural networks when the training dataset is small.
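The abstract's claim that infinite-depth machines generalize both neural networks and neural ODEs can be illustrated with a minimal numerical sketch. This is not the paper's actual construction, only the standard observation it builds on: a residual network takes finitely many discrete update steps, and as the number of steps grows those updates converge to the solution of an ODE. All function and variable names below (`f`, `resnet_forward`, `node_forward`) are hypothetical, chosen for illustration.

```python
import numpy as np

def f(h, W):
    """A simple parametric vector field: a tanh layer with weights W."""
    return np.tanh(W @ h)

def resnet_forward(h, W, depth):
    """Finite-depth machine: `depth` discrete residual updates of size 1/depth."""
    for _ in range(depth):
        h = h + f(h, W) / depth
    return h

def node_forward(h, W, steps=1000):
    """Infinite-depth limit, approximated by Euler integration of
    dh/dt = f(h, W) over t in [0, 1]."""
    dt = 1.0 / steps
    for _ in range(steps):
        h = h + dt * f(h, W)
    return h

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 3)) * 0.5
h0 = rng.standard_normal(3)

# As depth grows, the residual network approaches the ODE solution.
shallow = resnet_forward(h0, W, depth=4)
deep = resnet_forward(h0, W, depth=1000)
ode = node_forward(h0, W, steps=1000)
print(np.linalg.norm(shallow - ode), np.linalg.norm(deep - ode))
```

With 1000 steps the discrete residual updates and the Euler integration coincide, while the 4-step network remains a coarse approximation; the ODE is in this sense the depth-to-infinity limit of the residual architecture.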