Title
Translated Skip Connections -- Expanding the Receptive Fields of Fully Convolutional Neural Networks
Authors
Abstract
The effective receptive field of a fully convolutional neural network is an important consideration when designing an architecture, as it defines the portion of the input visible to each convolutional kernel. We propose a neural network module, extending traditional skip connections, called the translated skip connection. Translated skip connections geometrically increase the receptive field of an architecture with negligible impact on both the size of the parameter space and computational complexity. By embedding translated skip connections into a benchmark architecture, we demonstrate that our module matches or outperforms four other approaches to expanding the effective receptive fields of fully convolutional neural networks. We confirm this result across five contemporary image segmentation datasets from disparate domains, including the detection of COVID-19 infection, segmentation of aerial imagery, common object segmentation, and segmentation for self-driving cars.
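The abstract does not spell out the module's internals, but one plausible reading of a "translated" skip connection is that the skipped feature map is spatially shifted before being merged, so downstream kernels see input regions offset from their own location. The sketch below illustrates that idea with numpy; the function name, the use of circular rolling, and the choice of shift offsets are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def translated_skip(features, shifts):
    """Hypothetical sketch: merge spatially translated copies of a
    skipped feature map.

    Shifting (translating) the feature map before concatenation lets
    each downstream kernel see regions offset from its own location,
    enlarging the effective receptive field without adding any
    learned parameters. Circular rolling is used here only for
    simplicity; it is an assumption, not the paper's method.

    features: array of shape (H, W, C)
    shifts:   list of (dy, dx) integer offsets
    returns:  array of shape (H, W, C * len(shifts))
    """
    shifted = [np.roll(features, (dy, dx), axis=(0, 1)) for dy, dx in shifts]
    return np.concatenate(shifted, axis=-1)

# Toy 4x4 single-channel feature map; the identity shift (0, 0)
# preserves the ordinary skip connection as one of the copies.
fmap = np.arange(16, dtype=np.float32).reshape(4, 4, 1)
out = translated_skip(fmap, [(0, 0), (2, 0), (0, 2)])
print(out.shape)  # (4, 4, 3)
```

Because the shifts only rearrange existing activations, the added cost is a wider concatenation rather than new weights, which is consistent with the abstract's claim of negligible impact on parameter count and compute.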