Paper Title
STEdge: Self-training Edge Detection with Multi-layer Teaching and Regularization
Paper Authors
Abstract
Learning-based edge detection has heretofore been strongly supervised with pixel-wise annotations, which are tedious to obtain manually. We study the problem of self-training edge detection, leveraging the untapped wealth of large-scale unlabeled image datasets. We design a self-supervised framework with multi-layer regularization and self-teaching. In particular, we impose a consistency regularization which enforces the output of each of the multiple layers to be consistent between the input image and its perturbed counterpart. We adopt L0 smoothing as the "perturbation" to encourage edge predictions to lie on salient boundaries, following the cluster assumption of self-supervised learning. Meanwhile, the network is trained with multi-layer supervision by pseudo labels, which are initialized with Canny edges and then iteratively refined by the network as training proceeds. The regularization and self-teaching together attain a good balance of precision and recall, leading to a significant performance boost over supervised methods, with lightweight refinement on the target dataset. Furthermore, our method demonstrates strong cross-dataset generality. For example, it attains a 4.8% improvement in ODS and 5.8% in OIS when tested on the unseen BIPED dataset, compared to state-of-the-art methods.
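The multi-layer consistency regularization described in the abstract can be sketched as follows. This is a minimal illustration under assumptions of ours, not the paper's implementation: the network's side outputs are represented as plain NumPy arrays of per-pixel edge probabilities, and the perturbed input stands in for the L0-smoothed image; the function name and shapes are hypothetical.

```python
import numpy as np

def consistency_loss(layers_orig, layers_pert):
    """Mean squared difference between corresponding side outputs of
    the original image and its perturbed (e.g. L0-smoothed) counterpart,
    averaged over all layers -- a sketch of enforcing per-layer
    consistency between the two forward passes."""
    per_layer = [np.mean((a - b) ** 2) for a, b in zip(layers_orig, layers_pert)]
    return float(np.mean(per_layer))

# Toy example: three side outputs of an 8x8 image, values in [0, 1].
rng = np.random.default_rng(0)
orig_outputs = [rng.random((8, 8)) for _ in range(3)]
# Outputs on the perturbed input differ slightly from the originals.
pert_outputs = [np.clip(o + 0.05 * rng.standard_normal((8, 8)), 0.0, 1.0)
                for o in orig_outputs]

loss = consistency_loss(orig_outputs, pert_outputs)
```

In training, this loss would be added to the pseudo-label supervision term, pushing the predictions on the image and its smoothed counterpart toward agreement at every layer.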