Paper Title
Partly Supervised Multitask Learning
Paper Authors
Paper Abstract
Semi-supervised learning has recently been attracting attention as an alternative to fully supervised models that require large pools of labeled data. Moreover, optimizing a model for multiple tasks can provide better generalizability than single-task learning. Leveraging self-supervision and adversarial training, we propose a novel general-purpose semi-supervised, multitask model---namely, self-supervised, semi-supervised, multitask learning (S$^4$MTL)---for accomplishing two important tasks in medical imaging: segmentation and diagnostic classification. Experimental results on chest and spine X-ray datasets suggest that our S$^4$MTL model significantly outperforms semi-supervised single-task, semi/fully-supervised multitask, and fully-supervised single-task models, even with a 50\% reduction of class and segmentation labels. We hypothesize that our proposed model can be effective in tackling limited annotation problems for joint training, not only in medical imaging domains, but also for general-purpose vision tasks.
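The abstract describes jointly training segmentation and classification with both labeled and unlabeled data. A minimal sketch of how such a semi-supervised multitask objective could combine supervised and unsupervised terms (the weighting scheme, function names, and default coefficients here are illustrative assumptions, not the paper's actual S$^4$MTL formulation):

```python
def s4mtl_style_loss(seg_loss, cls_loss, unsup_loss,
                     alpha=1.0, beta=1.0, gamma=0.5):
    """Combine a supervised segmentation loss and a supervised
    classification loss (computed on the labeled subset) with an
    unsupervised term (e.g. a self-supervised or adversarial loss
    computed on unlabeled images).

    NOTE: alpha, beta, gamma are hypothetical weighting
    hyperparameters for illustration only.
    """
    supervised = alpha * seg_loss + beta * cls_loss
    return supervised + gamma * unsup_loss


# Example: per-batch losses from the three objectives are
# reduced to one scalar that a single optimizer step minimizes.
total = s4mtl_style_loss(seg_loss=1.0, cls_loss=2.0, unsup_loss=4.0)
```

In this kind of setup, halving the labels (as in the paper's 50\% experiments) shrinks only the supervised terms' data, while the unsupervised term continues to exploit the full unlabeled pool.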