Paper Title
RBC: Rectifying the Biased Context in Continual Semantic Segmentation
Paper Authors
Paper Abstract
Recent years have witnessed great development of Convolutional Neural Networks in semantic segmentation, where all classes of training images are simultaneously available. In practice, new images are usually made available in a consecutive manner, leading to a problem called Continual Semantic Segmentation (CSS). Typically, CSS faces the forgetting problem, since previous training images are unavailable, as well as the semantic shift problem of the background class. Considering semantic segmentation as a context-dependent pixel-level classification task, we explore CSS from a new perspective of context analysis in this paper. We observe that the context of old-class pixels in the new images is much more biased toward new classes than that in the old images, which can sharply aggravate old-class forgetting and new-class overfitting. To tackle this obstacle, we propose a biased-context-rectified CSS framework with a context-rectified image-duplet learning scheme and a biased-context-insensitive consistency loss. Furthermore, we propose an adaptive re-weighting class-balanced learning strategy for the biased class distribution. Our approach outperforms state-of-the-art methods by a large margin in existing CSS scenarios.
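To make the two loss-level ideas mentioned in the abstract more concrete, below is a minimal PyTorch sketch, not the authors' released code: a consistency term between predictions on an image and its context-modified duplet, restricted to old-class pixels, plus a class-balanced re-weighting of the segmentation loss. The function names, the way the duplet and old-class mask are obtained, and the inverse-frequency weighting formula are all illustrative assumptions, since the abstract does not specify them.

```python
# Illustrative sketch only; the duplet construction, masking, and weighting
# scheme are assumptions, not the paper's exact formulation.
import torch
import torch.nn.functional as F


def consistency_loss(logits_orig, logits_duplet, old_class_mask):
    """KL divergence between predictions on the original image and its duplet,
    evaluated only on old-class pixels so the term is insensitive to the
    new-class context present in the duplet.
    Shapes: logits (B, C, H, W), mask (B, H, W)."""
    log_p = F.log_softmax(logits_duplet, dim=1)
    q = F.softmax(logits_orig, dim=1)
    kl = F.kl_div(log_p, q, reduction="none").sum(dim=1)  # (B, H, W)
    mask = old_class_mask.float()
    return (kl * mask).sum() / mask.sum().clamp(min=1.0)


def class_balanced_weights(labels, num_classes, ignore_index=255):
    """Inverse-frequency class weights from the current batch; one plausible
    instantiation of a re-weighting class-balanced strategy."""
    valid = labels[labels != ignore_index]
    counts = torch.bincount(valid, minlength=num_classes).float()
    weights = counts.sum() / counts.clamp(min=1.0)
    return weights / weights.mean()


def training_step(model, images, duplets, labels, old_class_mask,
                  num_classes, lam=1.0):
    """Joint objective: class-balanced cross-entropy + masked consistency."""
    logits = model(images)
    logits_dup = model(duplets)
    weights = class_balanced_weights(labels, num_classes).to(logits.device)
    ce = F.cross_entropy(logits, labels, weight=weights, ignore_index=255)
    cons = consistency_loss(logits, logits_dup, old_class_mask)
    return ce + lam * cons
```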