Paper Title

Point-Unet: A Context-aware Point-based Neural Network for Volumetric Segmentation

Authors

Ngoc-Vuong Ho, Tan Nguyen, Gia-Han Diep, Ngan Le, Binh-Son Hua

Abstract

Medical image analysis using deep learning has recently become prevalent, showing strong performance on various downstream tasks, including medical image segmentation and its sibling, volumetric image segmentation. In particular, a typical volumetric segmentation network relies heavily on a voxel grid representation that treats volumetric data as a stack of individual voxel `slices', which makes learning to segment a voxel grid as straightforward as extending existing image-based segmentation networks to the 3D domain. However, a voxel grid representation requires a large memory footprint, is expensive at test time, and limits the scalability of the solution. In this paper, we propose Point-Unet, a novel method that brings the efficiency of deep learning on 3D point clouds to volumetric segmentation. Our key idea is to first predict the regions of interest in the volume by learning an attentional probability map, which is then used to sample the volume into a sparse point cloud that is subsequently segmented using a point-based neural network. We have conducted experiments on medical volumetric segmentation with both the small-scale Pancreas dataset and the large-scale BraTS18, BraTS19, and BraTS20 challenge datasets. A comprehensive benchmark across different metrics shows that our context-aware Point-Unet robustly outperforms SOTA voxel-based networks in accuracy, memory usage during training, and time consumption during testing. Our code is available at https://github.com/VinAIResearch/Point-Unet.
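The step that distinguishes Point-Unet from voxel-based pipelines is the attention-guided sampling: the learned probability map decides which voxels become points before a point-based network segments them. The sketch below illustrates only that sampling step under stated assumptions; the function name, the number of sampled points, and the toy inputs are illustrative placeholders, not the authors' actual implementation (see the released code at the URL above for the real pipeline).

```python
# Minimal sketch of attention-guided point sampling, as described in the abstract.
# Names, shapes, and num_points are hypothetical; only the idea (sampling a sparse
# point cloud from a volume, biased by a learned attention probability map) follows
# the paper's description.
import numpy as np

def sample_point_cloud(volume, prob_map, num_points=4096, rng=None):
    """Sample a sparse point cloud from a volume, biased toward regions of interest.

    volume:   (D, H, W) array of intensities (e.g. one MRI modality).
    prob_map: (D, H, W) attention probability map predicted by a saliency network.
    Returns a (num_points, 4) array: normalized xyz coordinates + intensity feature,
    ready to feed into a point-based segmentation network.
    """
    rng = rng or np.random.default_rng(0)
    d, h, w = volume.shape
    probs = prob_map.ravel().astype(np.float64)
    probs /= probs.sum()                               # normalize to a distribution
    idx = rng.choice(probs.size, size=num_points, replace=False, p=probs)
    z, y, x = np.unravel_index(idx, (d, h, w))
    coords = np.stack([x / w, y / h, z / d], axis=1)   # normalized coordinates
    feats = volume.ravel()[idx, None]                  # per-point intensity feature
    return np.concatenate([coords, feats], axis=1)

# Toy usage: a random volume and a random stand-in for the learned attention map.
volume = np.random.rand(64, 64, 64).astype(np.float32)
prob_map = np.random.rand(64, 64, 64).astype(np.float32)
points = sample_point_cloud(volume, prob_map)
print(points.shape)  # (4096, 4)
```

In the full method, the point-based network's per-point predictions would then be scattered back onto the voxel grid to form the final volumetric segmentation.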
