Paper Title

DSFormer: A Dual-domain Self-supervised Transformer for Accelerated Multi-contrast MRI Reconstruction

Paper Authors

Bo Zhou, Neel Dey, Jo Schlemper, Seyed Sadegh Mohseni Salehi, Chi Liu, James S. Duncan, Michal Sofka

Paper Abstract

Multi-contrast MRI (MC-MRI) captures multiple complementary imaging modalities to aid in radiological decision-making. Given the need for lowering the time cost of multiple acquisitions, current deep accelerated MRI reconstruction networks focus on exploiting the redundancy between multiple contrasts. However, existing works are largely supervised with paired data and/or prohibitively expensive fully-sampled MRI sequences. Further, reconstruction networks typically rely on convolutional architectures which are limited in their capacity to model long-range interactions and may lead to suboptimal recovery of fine anatomical detail. To these ends, we present a dual-domain self-supervised transformer (DSFormer) for accelerated MC-MRI reconstruction. DSFormer develops a deep conditional cascade transformer (DCCT) consisting of several cascaded Swin transformer reconstruction networks (SwinRN) trained under two deep conditioning strategies to enable MC-MRI information sharing. We further present a dual-domain (image and k-space) self-supervised learning strategy for DCCT to alleviate the costs of acquiring fully sampled training data. DSFormer generates high-fidelity reconstructions which experimentally outperform current fully-supervised baselines. Moreover, we find that DSFormer achieves nearly the same performance when trained either with full supervision or with our proposed dual-domain self-supervision.
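
The abstract names two ingredients: a conditional cascade of reconstruction networks with information sharing from a second contrast, and dual-domain (image and k-space) self-supervision that avoids fully sampled training data. The sketch below is a minimal, hypothetical illustration of how such a setup is commonly assembled, not the authors' implementation: a small CNN stands in for SwinRN, data is assumed single-coil 2D, and the self-supervision follows an assumed SSDU-style disjoint splitting of the acquired k-space lines; all function and class names are placeholders.

```python
# Minimal sketch (not the authors' code): a conditioned reconstruction cascade with
# k-space data consistency, plus a dual-domain self-supervised training step based on
# disjoint splitting of the acquired k-space. Assumptions: single-coil 2D data, a small
# CNN standing in for SwinRN, SSDU-style mask splitting; all names are hypothetical.
import torch
import torch.nn as nn


def fft2c(x):
    """Orthonormal 2D FFT: complex image (B, H, W) -> k-space."""
    return torch.fft.fft2(x, norm="ortho")


def ifft2c(k):
    """Orthonormal 2D inverse FFT: k-space (B, H, W) -> complex image."""
    return torch.fft.ifft2(k, norm="ortho")


def to_real(z):
    """Complex (B, H, W) -> 2-channel real (B, 2, H, W)."""
    return torch.stack([z.real, z.imag], dim=1)


def to_complex(x):
    """2-channel real (B, 2, H, W) -> complex (B, H, W)."""
    return torch.complex(x[:, 0], x[:, 1])


class ReconBlock(nn.Module):
    """Stand-in for one SwinRN stage; the fully sampled conditioning contrast
    (e.g. T1w when reconstructing T2w) is concatenated at the input."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1),
        )

    def forward(self, x, cond):
        return x + self.net(torch.cat([x, cond], dim=1))


class CascadeRecon(nn.Module):
    """Cascade of refinement blocks, each followed by k-space data consistency."""

    def __init__(self, n_cascades=3):
        super().__init__()
        self.blocks = nn.ModuleList(ReconBlock() for _ in range(n_cascades))

    def forward(self, k_in, mask, cond_img):
        # k_in: undersampled k-space (B, H, W) complex; mask: (B, H, W) float in {0, 1}.
        x = to_real(ifft2c(k_in))
        for block in self.blocks:
            x = block(x, cond_img)
            k = fft2c(to_complex(x))
            k = mask * k_in + (1.0 - mask) * k  # keep acquired samples unchanged
            x = to_real(ifft2c(k))
        return x


def self_supervised_step(model, k_under, mask, cond_img):
    """One dual-domain self-supervised step: split the acquired lines into two disjoint
    subsets, reconstruct from one, score the held-out lines in k-space, and add an
    image-domain consistency term against the reconstruction from all acquired data."""
    acquired = mask > 0
    keep_b = (torch.rand_like(mask) < 0.6) & acquired   # subset fed to the network
    held_b = acquired & ~keep_b                         # subset held out for the loss
    keep = keep_b.float()

    x_hat = model(k_under * keep, keep, cond_img)
    k_hat = fft2c(to_complex(x_hat))
    loss_kspace = (k_hat[held_b] - k_under[held_b]).abs().mean()

    x_full = model(k_under, mask, cond_img)              # uses all acquired samples
    loss_image = (x_hat - x_full).abs().mean()
    return loss_kspace + loss_image
```

The per-block data-consistency step and the mask-splitting loss are standard choices in cascaded and self-supervised MRI reconstruction; whether DSFormer uses exactly these forms, or how its two deep conditioning strategies inject the second contrast, is not specified in the abstract and would need the full paper.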
