Paper Title
Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN
Paper Authors
Abstract
Deep learning-based methods have achieved significant performance in image defogging. However, existing methods are mainly developed for land scenes and perform poorly on overwater foggy images, since overwater scenes typically contain large expanses of sky and water. In this work, we propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes. To promote the recovery of objects on the water, two loss functions are built around a prior map that inverts the dark channel and applies min-max normalization to suppress the sky and emphasize objects. However, because the training set is unpaired, the network may learn an under-constrained domain mapping from foggy to fog-free images, leading to artifacts and loss of detail. We therefore propose an intuitive Upscaling Inception Module (UIM) and a Long-range Residual Coarse-to-fine framework (LRC) to mitigate this issue. Extensive qualitative and quantitative comparisons demonstrate that the proposed method outperforms state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
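The abstract describes the prior map as an inverted dark channel followed by min-max normalization. The sketch below illustrates that idea in NumPy under stated assumptions: the patch size, the naive sliding-window minimum filter, and the `prior_map` function name are illustrative choices, not the paper's implementation details.

```python
import numpy as np

def prior_map(image, patch=15):
    """Illustrative prior map: inverted dark channel + min-max normalization.

    image: H x W x 3 float array in [0, 1].
    The dark channel (He et al.'s prior) is the per-pixel channel minimum,
    min-filtered over a local patch. Sky and water regions tend to have
    bright dark channels, so inverting the dark channel suppresses them,
    and min-max normalization rescales the result to [0, 1] so that
    objects on the water stand out. Patch size 15 is an assumed default.
    """
    h, w, _ = image.shape
    dark = image.min(axis=2)                    # per-pixel minimum over RGB
    pad = patch // 2
    padded = np.pad(dark, pad, mode="edge")
    # local minimum filter (naive sliding window, for clarity not speed)
    filtered = np.empty_like(dark)
    for i in range(h):
        for j in range(w):
            filtered[i, j] = padded[i:i + patch, j:j + patch].min()
    inverted = 1.0 - filtered                   # invert the dark channel
    lo, hi = inverted.min(), inverted.max()
    return (inverted - lo) / (hi - lo + 1e-8)   # min-max normalization
```

In practice the inner loop would be replaced by a fast morphological erosion (e.g. `scipy.ndimage.minimum_filter`); the loop form is kept here only to make the patch-minimum explicit.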