Paper Title
Robust One-shot Segmentation of Brain Tissues via Image-aligned Style Transformation
Paper Authors
Paper Abstract
One-shot segmentation of brain tissues is typically formulated as dual-model iterative learning: a registration model (reg-model) warps a carefully labeled atlas onto unlabeled images to initialize their pseudo masks, which are used to train a segmentation model (seg-model); the seg-model then revises the pseudo masks to enhance the reg-model for better warping in the next iteration. However, such dual-model iteration has a key weakness: the spatial misalignment inevitably introduced by the reg-model can misguide the seg-model, eventually making it converge to an inferior segmentation performance. In this paper, we propose a novel image-aligned style transformation to reinforce the dual-model iterative learning for robust one-shot segmentation of brain tissues. Specifically, we first utilize the reg-model to warp the atlas onto an unlabeled image, and then employ a perturbed Fourier-based amplitude exchange to transplant the style of the unlabeled image into the aligned atlas. This allows the subsequent seg-model to learn on the aligned and style-transferred copies of the atlas instead of the unlabeled images, which naturally guarantees the correct spatial correspondence of each image-mask training pair, without sacrificing the diversity of intensity patterns carried by the unlabeled images. Furthermore, we introduce a feature-aware content consistency in addition to the image-level similarity to constrain the reg-model toward a promising initialization, which avoids the collapse of the image-aligned style transformation in the first iteration. Experimental results on two public datasets demonstrate 1) a competitive segmentation performance of our method compared to the fully supervised method, and 2) a superior performance over other state-of-the-art methods, with an increase in average Dice of up to 4.67%. The source code is available at: https://github.com/JinxLv/One-shot-segmentation-via-IST.
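The core style-transfer step described above, exchanging low-frequency Fourier amplitudes (style) while preserving the atlas's phase (content), can be sketched as follows. This is a minimal illustration in the spirit of Fourier-based amplitude exchange, not the authors' implementation; the function name and the `beta` (band size) and `perturb` (amplitude noise) parameters are assumptions for illustration only.

```python
import numpy as np

def fourier_amplitude_exchange(aligned_atlas, unlabeled_image,
                               beta=0.1, perturb=0.0, rng=None):
    """Transplant the style (low-frequency Fourier amplitude) of
    `unlabeled_image` into `aligned_atlas`, keeping the atlas's phase.

    beta: fraction of the spectrum (per axis) whose amplitude is swapped.
    perturb: multiplicative uniform noise applied to the swapped band.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Forward FFT; amplitude carries style, phase carries content/structure.
    fft_atlas = np.fft.fft2(aligned_atlas)
    fft_img = np.fft.fft2(unlabeled_image)
    amp_atlas, pha_atlas = np.abs(fft_atlas), np.angle(fft_atlas)
    amp_img = np.abs(fft_img)

    # Shift so the low-frequency band becomes a central square.
    amp_atlas = np.fft.fftshift(amp_atlas)
    amp_img = np.fft.fftshift(amp_img)

    h, w = aligned_atlas.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2

    # Exchange the low-frequency amplitudes, optionally perturbed.
    band = amp_img[ch - bh:ch + bh, cw - bw:cw + bw]
    if perturb > 0:
        band = band * rng.uniform(1 - perturb, 1 + perturb, band.shape)
    amp_atlas[ch - bh:ch + bh, cw - bw:cw + bw] = band

    # Recombine the swapped amplitude with the atlas phase and invert.
    amp_atlas = np.fft.ifftshift(amp_atlas)
    styled = np.fft.ifft2(amp_atlas * np.exp(1j * pha_atlas))
    return np.real(styled)
```

Because the atlas's phase spectrum is untouched, the anatomical structure (and hence the warped atlas mask) stays spatially valid, while the intensity style follows the unlabeled image; this is what lets the seg-model train on style-transferred atlas copies with guaranteed image-mask correspondence.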