Paper Title
Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing
Paper Authors
Paper Abstract
Whole slide imaging (WSI) is an emerging technology for digital pathology. The autofocusing process is the main factor affecting the performance of WSI. Traditional autofocusing methods are either time-consuming due to repetitive mechanical motions, or require additional hardware and are thus incompatible with current WSI systems. In this paper, we propose the concept of \textit{virtual autofocusing}, which does not rely on mechanical adjustment for refocusing but instead recovers in-focus images in an offline, learning-based manner. Given an initial focal position, we perform only two-shot imaging, in contrast to traditional methods, which commonly need to capture as many as 21 images per tile during scanning. Considering that the two captured out-of-focus images retain partial information about the underlying in-focus image, we propose a U-Net-inspired deep neural network approach to fuse them into a recovered in-focus image. The proposed scheme is fast in tissue-slide scanning, enabling high-throughput generation of digital pathology images. Experimental results demonstrate that our scheme achieves satisfactory refocusing performance.
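The abstract does not specify the network architecture beyond "U-Net-inspired fusion of two out-of-focus captures". The following is a minimal sketch, not the authors' implementation: it assumes the two defocused RGB tiles are concatenated along the channel axis and fed to a small U-Net-style encoder-decoder (the channel counts, depth, and tile size here are illustrative assumptions).

```python
# Sketch only: a hypothetical two-shot fusion network in the spirit of the paper.
# Assumptions (not from the source): 6-channel input (two stacked RGB captures),
# two-level U-Net with skip connections, 3-channel in-focus output.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class TwoShotFusionUNet(nn.Module):
    """U-Net-inspired network fusing two defocused RGB tiles into an in-focus tile."""

    def __init__(self, in_ch=6, out_ch=3, base=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, kernel_size=2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, kernel_size=2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, out_ch, kernel_size=1)

    def forward(self, shot_a, shot_b):
        # Stack the two out-of-focus captures along the channel dimension.
        x = torch.cat([shot_a, shot_b], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        # Decoder with skip connections from the encoder.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # recovered in-focus tile


if __name__ == "__main__":
    net = TwoShotFusionUNet()
    a = torch.randn(1, 3, 256, 256)  # capture at the first focal offset
    b = torch.randn(1, 3, 256, 256)  # capture at the second focal offset
    print(net(a, b).shape)  # torch.Size([1, 3, 256, 256])
```

In practice such a network would be trained offline on pairs of defocused captures with the corresponding in-focus tile (obtained, for example, from a conventional z-stack) as the regression target; the exact training data and loss used by the authors are not described in this abstract.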