Paper Title

DUG-RECON: A Framework for Direct Image Reconstruction using Convolutional Generative Networks

Authors

Kandarpa, V. S. S., Bousse, Alexandre, Benoit, Didier, Visvikis, Dimitris

Abstract

This paper explores convolutional generative networks as an alternative to iterative reconstruction algorithms in medical image reconstruction. The task of medical image reconstruction involves mapping the projection data collected from the detector to the image domain. This mapping is typically done through iterative reconstruction algorithms, which are time-consuming and computationally expensive. Trained deep learning networks provide faster outputs, as proven in various tasks across computer vision. In this work, we propose a direct reconstruction framework built exclusively with deep learning architectures. The proposed framework consists of three segments, namely denoising, reconstruction, and super-resolution. The denoising and super-resolution segments act as processing steps, while the reconstruction segment consists of a novel double U-Net generator (DUG) that learns the sinogram-to-image transformation. The entire network was trained on positron emission tomography (PET) and computed tomography (CT) images. The reconstruction framework approximates the two-dimensional (2-D) mapping from the projection domain to the image domain. The architecture proposed in this proof-of-concept work is a novel approach to direct image reconstruction; further improvement is required before it can be implemented in a clinical setting.
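For intuition, the sketch below shows what such a three-segment pipeline (denoising, double U-Net reconstruction, super-resolution) could look like in PyTorch. It is not the authors' network: the class names SmallUNet and DirectReconPipeline, the layer widths, and the 128x128 tensor sizes are all illustrative assumptions, and the paper's actual DUG architecture, losses, and training procedure are not reproduced here.

```python
# Minimal schematic sketch of a three-segment direct-reconstruction pipeline
# (denoising -> double U-Net generator -> super-resolution), as described in
# the abstract. All names, widths, and sizes are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class SmallUNet(nn.Module):
    """Toy encoder-decoder standing in for each U-Net block."""

    def __init__(self, in_ch=1, out_ch=1, base=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))


class DirectReconPipeline(nn.Module):
    """Chains denoising, a two-U-Net generator, and super-resolution refinement."""

    def __init__(self):
        super().__init__()
        self.denoiser = SmallUNet()                      # cleans the raw sinogram
        self.generator = nn.Sequential(SmallUNet(),      # "double U-Net" generator (DUG)
                                       SmallUNet())
        self.super_res = SmallUNet()                     # sharpens the reconstructed image

    def forward(self, sinogram):
        x = self.denoiser(sinogram)
        x = self.generator(x)                            # projection domain -> image domain
        return self.super_res(x)


if __name__ == "__main__":
    model = DirectReconPipeline()
    sino = torch.randn(1, 1, 128, 128)                   # dummy 2-D sinogram batch
    print(model(sino).shape)                             # torch.Size([1, 1, 128, 128])
```

In the sketch the three segments are simply chained end to end; in the paper each segment is a distinct stage of the framework, with the denoising and super-resolution networks acting as processing steps around the sinogram-to-image generator.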
