Paper Title

Multi Layer Neural Networks as Replacement for Pooling Operations

Paper Authors

Wolfgang Fuhl, Enkelejda Kasneci

Paper Abstract

Pooling operations, which can be calculated at low cost and serve as a linear or nonlinear transfer function for data reduction, are found in almost every modern neural network. Countless modern approaches have already tackled replacing the common maximum value selection and mean value operations, not to mention providing a function that allows different functions to be selected through changing parameters. Additional neural networks are used to estimate the parameters of these pooling functions. Consequently, pooling layers may require supplementary parameters to increase the complexity of the whole model. In this work, we show that one perceptron can already be used effectively as a pooling operation without increasing the complexity of the model. This kind of pooling allows for the integration of multi-layer neural networks directly into a model as a pooling operation by restructuring the data and, as a result, learning complex pooling operations. We compare our approach to tensor convolution with strides as a pooling operation and show that our approach is both effective and reduces complexity. The restructuring of the data in combination with multiple perceptrons allows our approach to be used for upscaling, which can then be utilized for transposed convolutions in semantic segmentation.
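To make the idea concrete, the following is a minimal sketch of perceptron-based pooling, assuming PyTorch. The class name `PerceptronPooling` and all hyperparameters are illustrative assumptions, not the authors' reference implementation: each k×k window is flattened via unfolding (the "restructuring of the data" the abstract mentions) and mapped to one output value by a single shared perceptron.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerceptronPooling(nn.Module):
    """Learned pooling: one shared perceptron replaces max/average pooling.

    Each k x k window is flattened into a vector (the data-restructuring
    step) and mapped to a single output value by a shared linear layer,
    so the pooling function is learned instead of fixed.
    """
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.kernel_size = kernel_size
        self.stride = stride
        # One perceptron shared across all channels and window positions.
        # Adding hidden layers here would give the multi-layer variant.
        self.perceptron = nn.Linear(kernel_size * kernel_size, 1)

    def forward(self, x):
        n, c, h, w = x.shape
        # Extract all pooling windows: (N, C * k * k, L), L = #windows.
        patches = F.unfold(x, self.kernel_size, stride=self.stride)
        out_h = (h - self.kernel_size) // self.stride + 1
        out_w = (w - self.kernel_size) // self.stride + 1
        # Separate channels from window elements; put the k*k window
        # values on the last axis so the perceptron can consume them.
        patches = patches.view(n, c, self.kernel_size ** 2, -1)
        patches = patches.permute(0, 1, 3, 2)
        pooled = self.perceptron(patches).squeeze(-1)  # (N, C, L)
        return pooled.view(n, c, out_h, out_w)

# Usage: drop-in replacement for nn.MaxPool2d(2).
pool = PerceptronPooling(kernel_size=2, stride=2)
y = pool(torch.randn(1, 16, 32, 32))  # -> shape (1, 16, 16, 16)
```

Note that because the perceptron weights are shared across all windows and channels, the parameter count stays tiny (k² + 1 here), which matches the abstract's claim that the model's complexity does not increase.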
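For the upscaling use case mentioned at the end of the abstract, one hedged reading is to attach r² perceptrons per input location and rearrange their outputs pixel-shuffle style into a larger feature map. The sketch below is an assumption about how that could look, not the paper's exact construction; `PerceptronUpscaling` is a hypothetical name.

```python
import torch
import torch.nn as nn

class PerceptronUpscaling(nn.Module):
    """Hypothetical upscaling counterpart: r*r perceptrons per position.

    Each input value is mapped to scale*scale output values (one
    perceptron per sub-pixel position), and the outputs are rearranged
    pixel-shuffle style into an upscaled feature map.
    """
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        # One output unit per target sub-pixel position.
        self.perceptrons = nn.Linear(1, scale * scale)

    def forward(self, x):
        n, c, h, w = x.shape
        r = self.scale
        out = self.perceptrons(x.unsqueeze(-1))  # (N, C, H, W, r*r)
        out = out.view(n, c, h, w, r, r)         # split sub-pixel axes
        out = out.permute(0, 1, 2, 4, 3, 5).contiguous()
        return out.view(n, c, h * r, w * r)      # (N, C, H*r, W*r)

up = PerceptronUpscaling(scale=2)
z = up(torch.randn(1, 8, 16, 16))  # -> shape (1, 8, 32, 32)
```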
