Paper Title
Domain Adaptive Relational Reasoning for 3D Multi-Organ Segmentation
Paper Authors
Paper Abstract
In this paper, we present a novel unsupervised domain adaptation (UDA) method, named Domain Adaptive Relational Reasoning (DARR), to generalize 3D multi-organ segmentation models to medical data collected from different scanners and/or protocols (domains). Our method is inspired by the fact that the spatial relationship between internal structures in medical images is relatively fixed, e.g., a spleen is always located at the tail of a pancreas, which serves as a latent variable to transfer the knowledge shared across multiple domains. We formulate the spatial relationship by solving a jigsaw puzzle task, i.e., recovering a CT scan from its shuffled patches, and jointly train it with the organ segmentation task. To guarantee the transferability of the learned spatial relationship to multiple domains, we additionally introduce two schemes: 1) Employing a super-resolution network also jointly trained with the segmentation model to standardize medical images from different domains to a certain spatial resolution; 2) Adapting the spatial relationship for a test image by test-time jigsaw puzzle training. Experimental results show that our method improves the performance by 29.60% DSC on target datasets on average without using any data from the target domain during training.
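The abstract describes jointly training organ segmentation with a jigsaw puzzle self-supervision task. The PyTorch sketch below is only a rough illustration of what such joint training could look like: the model `JointSegJigsawNet`, the helper `apply_permutation`, the toy encoder, the fixed permutation bank, and the permutation-classification form of the jigsaw loss are all illustrative assumptions, not the authors' implementation (which additionally includes the super-resolution network and test-time adaptation steps omitted here).

```python
# Minimal sketch (assumption, not the authors' released code) of joint
# segmentation + jigsaw-puzzle training on 3D CT volumes.
import torch
import torch.nn as nn


def apply_permutation(volume, perm, grid=2):
    """Split a volume (B, C, D, H, W) into grid**3 equal patches,
    reorder them according to `perm`, and reassemble the volume."""
    b, c, d, h, w = volume.shape
    pd, ph, pw = d // grid, h // grid, w // grid
    patches = (
        volume.unfold(2, pd, pd).unfold(3, ph, ph).unfold(4, pw, pw)
              .reshape(b, c, grid ** 3, pd, ph, pw)
    )
    shuffled = patches[:, :, perm]
    return (
        shuffled.reshape(b, c, grid, grid, grid, pd, ph, pw)
                .permute(0, 1, 2, 5, 3, 6, 4, 7)
                .reshape(b, c, d, h, w)
    )


class JointSegJigsawNet(nn.Module):
    """Hypothetical model: a shared 3D encoder feeding a segmentation head
    and a jigsaw head that classifies which permutation was applied."""
    def __init__(self, n_classes=14, n_perms=10):
        super().__init__()
        self.encoder = nn.Sequential(          # toy stand-in for a real 3D backbone
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(32, n_classes, 1)
        self.jigsaw_head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_perms)
        )

    def forward(self, x):
        feat = self.encoder(x)
        return self.seg_head(feat), self.jigsaw_head(feat)


# Small fixed bank of patch permutations (a simplification; the paper's exact
# permutation set and jigsaw formulation may differ).
PERM_BANK = [torch.randperm(8) for _ in range(10)]


def training_step(model, volume, label, jig_weight=0.1):
    """One joint step: supervised segmentation loss + self-supervised jigsaw loss."""
    seg_logits, _ = model(volume)
    seg_loss = nn.functional.cross_entropy(seg_logits, label)

    k = torch.randint(len(PERM_BANK), (1,)).item()
    shuffled = apply_permutation(volume, PERM_BANK[k])
    _, jig_logits = model(shuffled)
    jig_loss = nn.functional.cross_entropy(
        jig_logits, torch.full((volume.size(0),), k, device=volume.device)
    )
    return seg_loss + jig_weight * jig_loss


if __name__ == "__main__":
    net = JointSegJigsawNet()
    scan = torch.randn(1, 1, 32, 64, 64)          # toy CT volume
    mask = torch.randint(0, 14, (1, 32, 64, 64))  # toy voxel labels
    loss = training_step(net, scan, mask)
    loss.backward()
    print(float(loss))
```

At test time, the same jigsaw loss could in principle be minimized on an unlabeled target-domain scan to adapt the shared encoder, mirroring the test-time jigsaw puzzle training mentioned in the abstract.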