Paper Title

Prior Guided Feature Enrichment Network for Few-Shot Segmentation

Authors

Zhuotao Tian, Hengshuang Zhao, Michelle Shu, Zhicheng Yang, Ruiyu Li, Jiaya Jia

Abstract

State-of-the-art semantic segmentation methods require sufficient labeled data to achieve good results and hardly work on unseen classes without fine-tuning. Few-shot segmentation is thus proposed to tackle this problem by learning a model that quickly adapts to new classes with a few labeled support samples. These frameworks still face the challenge of generalization ability reduction on unseen classes due to inappropriate use of high-level semantic information of training classes and spatial inconsistency between query and support targets. To alleviate these issues, we propose the Prior Guided Feature Enrichment Network (PFENet). It consists of novel designs of (1) a training-free prior mask generation method that not only retains generalization power but also improves model performance and (2) Feature Enrichment Module (FEM) that overcomes spatial inconsistency by adaptively enriching query features with support features and prior masks. Extensive experiments on PASCAL-5$^i$ and COCO prove that the proposed prior generation method and FEM both improve the baseline method significantly. Our PFENet also outperforms state-of-the-art methods by a large margin without efficiency loss. It is surprising that our model even generalizes to cases without labeled support samples. Our code is available at https://github.com/Jia-Research-Lab/PFENet/.
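
The training-free prior described in the abstract can be understood as a pixel-wise matching score between query and support features. Below is a minimal PyTorch sketch of that idea: cosine similarity between high-level query and support features, reduced by a max over support foreground pixels, then min-max normalized to [0, 1]. Function and variable names here are illustrative, not the repository's actual API.

```python
import torch


def prior_mask(query_feat, support_feat, support_mask, eps=1e-7):
    """Sketch of a training-free prior mask.

    query_feat:   (B, C, H, W) high-level query features
    support_feat: (B, C, H, W) high-level support features
    support_mask: (B, 1, H, W) binary foreground mask of the support image
    returns:      (B, 1, H, W) prior mask with values in [0, 1]
    """
    b, c, h, w = query_feat.shape
    # Zero out support background so only foreground pixels can vote.
    support_feat = support_feat * support_mask
    q = query_feat.reshape(b, c, h * w)               # (B, C, HW_q)
    s = support_feat.reshape(b, c, h * w)             # (B, C, HW_s)
    q = q / (q.norm(dim=1, keepdim=True) + eps)
    s = s / (s.norm(dim=1, keepdim=True) + eps)
    sim = torch.bmm(q.transpose(1, 2), s)             # (B, HW_q, HW_s) cosine
    sim = sim.max(dim=2).values                       # best-matching support pixel
    # Min-max normalize per image so priors are comparable across samples.
    mn = sim.min(dim=1, keepdim=True).values
    mx = sim.max(dim=1, keepdim=True).values
    sim = (sim - mn) / (mx - mn + eps)
    return sim.reshape(b, 1, h, w)
```

Because this step involves no learnable parameters, it does not overfit to training classes, which is why the abstract calls it generalization-preserving.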
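The abstract's second component, FEM, enriches query features with support information at multiple spatial scales. The following is a deliberately simplified, hypothetical sketch of that enrichment pattern: each scale of the query feature map is concatenated with a spatially expanded support prototype and the resized prior mask, merged by a convolution, and the per-scale outputs are fused. The actual FEM also includes inter-scale interaction paths that this sketch omits; the class name, scale set, and fusion rule are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleEnrichment(nn.Module):
    """Simplified multi-scale query-feature enrichment (not the real FEM)."""

    def __init__(self, channels, scales=(60, 30, 15, 8)):
        super().__init__()
        self.scales = scales
        # 2*C (query + prototype) + 1 (prior mask) -> C at every scale.
        self.merge = nn.ModuleList(
            nn.Conv2d(2 * channels + 1, channels, kernel_size=1)
            for _ in scales
        )

    def forward(self, query_feat, support_proto, prior):
        # query_feat: (B, C, H, W); support_proto: (B, C, 1, 1); prior: (B, 1, H, W)
        outs = []
        out_size = query_feat.shape[-2:]
        for s, conv in zip(self.scales, self.merge):
            q = F.interpolate(query_feat, size=(s, s), mode='bilinear',
                              align_corners=True)
            p = F.interpolate(prior, size=(s, s), mode='bilinear',
                              align_corners=True)
            proto = support_proto.expand(-1, -1, s, s)
            merged = conv(torch.cat([q, proto, p], dim=1))
            outs.append(F.interpolate(merged, size=out_size, mode='bilinear',
                                      align_corners=True))
        # Sum-fuse the scales; the real FEM uses a more elaborate merge.
        return torch.stack(outs).sum(dim=0)
```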
