Paper Title

Multi-task UNet: Jointly Boosting Saliency Prediction and Disease Classification on Chest X-ray Images

Authors

Hongzhi Zhu, Robert Rohling, Septimiu Salcudean

Abstract

Human visual attention has recently shown its distinct capability in boosting machine learning models. However, studies that aim to facilitate medical tasks with human visual attention are still scarce. To support the use of visual attention, this paper describes a novel deep learning model for visual saliency prediction on chest X-ray (CXR) images. To cope with data deficiency, we exploit multi-task learning and tackle disease classification on CXR simultaneously. For a more robust training process, we propose a further optimized multi-task learning scheme to better handle model overfitting. Experiments show that the proposed deep learning model with our new learning scheme can outperform existing methods dedicated either to saliency prediction or to image classification. The code used in this paper is available at https://github.com/hz-zhu/MT-UNet.
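The multi-task setup described above combines a dense saliency-prediction objective with an image-level classification objective in a single training loss. The sketch below illustrates one common way such losses are combined: a KL-divergence term for the saliency map plus a cross-entropy term for the class label, mixed with a weighting factor. This is a minimal NumPy illustration of the general idea; the paper's actual loss terms, weighting scheme, and optimization details (including its overfitting handling) are defined in the paper and its repository, and the names `multitask_loss` and `alpha` here are illustrative assumptions, not the authors' API.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multitask_loss(sal_pred, sal_true, cls_logits, cls_label, alpha=0.5):
    """Weighted sum of a saliency loss and a classification loss.

    sal_pred, sal_true: non-negative saliency maps (any matching shape).
    cls_logits: 1-D array of class logits; cls_label: integer class index.
    alpha: task-balancing weight (illustrative; the paper's scheme may
    adapt this weighting during training).
    """
    eps = 1e-8
    # KL divergence between normalized ground-truth and predicted maps,
    # a common saliency-prediction loss.
    p = sal_true / (sal_true.sum() + eps)
    q = sal_pred / (sal_pred.sum() + eps)
    kl = float(np.sum(p * np.log((p + eps) / (q + eps))))
    # Cross-entropy for the disease-classification head.
    ce = float(-np.log(softmax(cls_logits)[cls_label] + eps))
    return alpha * kl + (1.0 - alpha) * ce
```

For example, a prediction identical to the ground-truth map contributes (near) zero KL loss, so the total reduces to the weighted classification term; tuning `alpha` trades off the two tasks.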
