Paper Title
Self-Training for Class-Incremental Semantic Segmentation
Paper Authors
Paper Abstract
In class-incremental semantic segmentation, we have no access to the labeled data of previous tasks. Therefore, when incrementally learning new classes, deep neural networks suffer from catastrophic forgetting of previously learned knowledge. To address this problem, we propose to apply a self-training approach that leverages unlabeled data, which is used for the rehearsal of previous knowledge. Specifically, we first learn a temporary model for the current task, and then pseudo labels for the unlabeled data are computed by fusing information from the old model of the previous task and the current temporary model. Additionally, conflict reduction is proposed to resolve the conflicts of pseudo labels generated from both the old and temporary models. We show that maximizing self-entropy can further improve results by smoothing the overconfident predictions. Interestingly, in the experiments we show that the auxiliary data can be different from the training data and that even general-purpose but diverse auxiliary data can lead to large performance gains. The experiments demonstrate state-of-the-art results: obtaining a relative gain of up to 114% on Pascal-VOC 2012 and 8.5% on the more challenging ADE20K compared to previous state-of-the-art methods.
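The abstract describes the pseudo-label fusion, conflict reduction, and self-entropy maximization only at a high level. The sketch below is one possible reading in PyTorch, not the paper's actual implementation: the function names, the confidence threshold, and the "keep the more confident prediction" conflict rule are assumptions introduced for illustration.

```python
import torch

# Hypothetical sketch: fuse per-pixel pseudo labels for an unlabeled auxiliary
# image from the old model (previous tasks) and the temporary model (current
# task), with a simple conflict-reduction rule and a self-entropy term.

def fuse_pseudo_labels(old_logits, tmp_logits, conf_threshold=0.9):
    """Combine predictions of the old and temporary models.

    old_logits: (B, C_old, H, W) logits over previously learned classes.
    tmp_logits: (B, C_new, H, W) logits over current-task classes.
    Returns pseudo labels of shape (B, H, W) in the joint label space
    [0, C_old + C_new), with -1 marking pixels left unlabeled (ignored).
    """
    old_prob = old_logits.softmax(dim=1)
    tmp_prob = tmp_logits.softmax(dim=1)

    old_conf, old_label = old_prob.max(dim=1)      # (B, H, W)
    tmp_conf, tmp_label = tmp_prob.max(dim=1)
    tmp_label = tmp_label + old_logits.shape[1]    # shift into the joint label space

    # Start with every pixel ignored.
    pseudo = torch.full_like(old_label, -1)

    # Take confident predictions when only one model is confident.
    old_sure = old_conf > conf_threshold
    tmp_sure = tmp_conf > conf_threshold
    pseudo[old_sure & ~tmp_sure] = old_label[old_sure & ~tmp_sure]
    pseudo[tmp_sure & ~old_sure] = tmp_label[tmp_sure & ~old_sure]

    # Conflict reduction (simplified assumption): if both models are confident
    # about the same pixel, keep the more confident of the two predictions.
    conflict = old_sure & tmp_sure
    pseudo[conflict] = torch.where(old_conf[conflict] >= tmp_conf[conflict],
                                   old_label[conflict], tmp_label[conflict])
    return pseudo


def self_entropy(logits):
    """Mean per-pixel entropy of the softmax prediction; maximizing it
    penalizes overconfident outputs on the auxiliary data."""
    prob = logits.softmax(dim=1)
    return -(prob * prob.clamp_min(1e-8).log()).sum(dim=1).mean()
```

Under these assumptions, the rehearsal loss on auxiliary images would be a cross-entropy against the fused pseudo labels (with -1 as the ignore index) minus a small weight times self_entropy(logits), so that maximizing self-entropy smooths overconfident predictions as the abstract suggests.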