Paper Title
SDT-DCSCN for Simultaneous Super-Resolution and Deblurring of Text Images
Paper Authors
Paper Abstract
Deep convolutional neural networks (Deep CNN) have achieved promising performance for single-image super-resolution. In particular, the Deep CNN with Skip Connection and Network in Network (DCSCN) architecture has been successfully applied to natural image super-resolution. In this work we propose an approach called SDT-DCSCN that jointly performs super-resolution and deblurring of low-resolution blurry text images based on DCSCN. Our approach uses subsampled blurry images as input and the original sharp images as ground truth. The architecture used has a higher number of filters in the input CNN layer for better analysis of text details. Quantitative and qualitative evaluations on different datasets demonstrate the high performance of our model in reconstructing high-resolution and sharp text images. In addition, in terms of computational time, our proposed method gives competitive performance compared to state-of-the-art methods.
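The abstract describes the architecture only at a high level. The sketch below is a minimal PyTorch illustration of the general DCSCN pattern the paper builds on (feature-extraction convolutions whose outputs are all concatenated via skip connections, 1x1 network-in-network reconstruction layers, and sub-pixel upscaling), with a widened first layer in the spirit of SDT-DCSCN; all layer counts and filter sizes here are illustrative assumptions, not the paper's reported values.

```python
# Minimal DCSCN-style sketch (assumed hyperparameters, not the paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCSCNSketch(nn.Module):
    def __init__(self, in_channels=1, first_filters=96, num_layers=7, scale=2):
        super().__init__()
        # Feature extraction: a chain of 3x3 convolutions whose filter count shrinks
        # with depth; every layer's output is kept for concatenation (skip connections).
        # A larger `first_filters` widens the input layer, as SDT-DCSCN proposes for text detail.
        filters = [first_filters - i * (first_filters // num_layers) for i in range(num_layers)]
        blocks, prev = [], in_channels
        for f in filters:
            blocks.append(nn.Sequential(nn.Conv2d(prev, f, 3, padding=1), nn.PReLU()))
            prev = f
        self.features = nn.ModuleList(blocks)
        total = sum(filters)
        # Reconstruction: 1x1 "network in network" convolutions compress the
        # concatenated features before sub-pixel upscaling.
        self.a1 = nn.Sequential(nn.Conv2d(total, 64, 1), nn.PReLU())
        self.b1 = nn.Sequential(nn.Conv2d(total, 32, 1), nn.PReLU())
        self.b2 = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.PReLU())
        self.up = nn.Conv2d(64 + 32, in_channels * scale * scale, 3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.scale = scale

    def forward(self, x):
        outs, h = [], x
        for layer in self.features:
            h = layer(h)
            outs.append(h)
        cat = torch.cat(outs, dim=1)                       # all skip connections concatenated
        recon = torch.cat([self.a1(cat), self.b2(self.b1(cat))], dim=1)
        out = self.shuffle(self.up(recon))                 # sub-pixel upscaling to HR size
        # Add a bicubically upscaled copy of the input so the network learns a residual.
        base = F.interpolate(x, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return out + base

# Usage sketch: a grayscale low-resolution blurry text crop mapped to a 2x sharp output.
if __name__ == "__main__":
    model = DCSCNSketch()
    lr_blurry = torch.randn(1, 1, 48, 48)
    print(model(lr_blurry).shape)  # torch.Size([1, 1, 96, 96])
```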