Paper Title

Efficient Bayesian Uncertainty Estimation for nnU-Net

Paper Authors

Yidong Zhao, Changchun Yang, Artur Schweidtmann, Qian Tao

Paper Abstract

The self-configuring nnU-Net has achieved leading performance in a large range of medical image segmentation challenges. It is widely considered as the model of choice and a strong baseline for medical image segmentation. However, despite its extraordinary performance, nnU-Net does not supply a measure of uncertainty to indicate its possible failure. This can be problematic for large-scale image segmentation applications, where data are heterogeneous and nnU-Net may fail without notice. In this work, we introduce a novel method to estimate nnU-Net uncertainty for medical image segmentation. We propose a highly effective scheme for posterior sampling of weight space for Bayesian uncertainty estimation. Different from previous baseline methods such as Monte Carlo Dropout and mean-field Bayesian Neural Networks, our proposed method does not require a variational architecture and keeps the original nnU-Net architecture intact, thereby preserving its excellent performance and ease of use. Additionally, we boost the segmentation performance over the original nnU-Net via marginalizing multi-modal posterior models. We applied our method on the public ACDC and M&M datasets of cardiac MRI and demonstrated improved uncertainty estimation over a range of baseline methods. The proposed method further strengthens nnU-Net for medical image segmentation in terms of both segmentation accuracy and quality control.
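
The abstract describes marginalizing over posterior weight samples to obtain both a segmentation and an uncertainty estimate. Below is a minimal sketch of that general idea, assuming (hypothetically) that the posterior weight samples are available as PyTorch checkpoint files and that `model` is an unmodified segmentation network; the function name, checkpoint handling, and the entropy-based uncertainty map are illustrative assumptions, not the authors' exact sampling scheme.

```python
# Minimal sketch: Monte Carlo marginalization over posterior weight samples.
# Assumes each weight sample is saved as a state-dict checkpoint; this is an
# illustration of the general idea (mean softmax + predictive entropy), not
# the paper's specific posterior sampling method.
import torch


def marginalized_prediction(model, checkpoint_paths, image, device="cuda"):
    """Average softmax predictions over posterior weight samples and
    return the marginal segmentation plus a voxel-wise entropy map."""
    model.to(device).eval()
    image = image.to(device)  # shape: (1, C, D, H, W) or (1, C, H, W)
    mean_probs = None

    with torch.no_grad():
        for path in checkpoint_paths:  # one forward pass per weight sample
            state = torch.load(path, map_location=device)
            model.load_state_dict(state)
            probs = torch.softmax(model(image), dim=1)  # class probabilities
            mean_probs = probs if mean_probs is None else mean_probs + probs

    mean_probs /= len(checkpoint_paths)  # Monte Carlo estimate of the marginal
    segmentation = mean_probs.argmax(dim=1)  # marginalized hard labels
    # Predictive entropy as a simple voxel-wise uncertainty measure
    entropy = -(mean_probs * torch.log(mean_probs.clamp_min(1e-8))).sum(dim=1)
    return segmentation, entropy
```

Because the network architecture is left untouched, such marginalization only requires multiple forward passes with different weight samples, which is consistent with the abstract's claim that the original nnU-Net remains intact.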
