Title
A New Knowledge Distillation Network for Incremental Few-Shot Surface Defect Detection
Authors
Abstract
Surface defect detection is one of the most essential processes in industrial quality inspection. Deep learning-based surface defect detection methods have shown great potential. However, well-performing models usually require large amounts of training data and can only detect defects that appear in the training stage. When facing incremental few-shot data, defect detection models inevitably suffer from catastrophic forgetting and misclassification problems. To solve these problems, this paper proposes a new knowledge distillation network, called the Dual Knowledge Align Network (DKAN). The proposed DKAN method follows a pretraining-finetuning transfer learning paradigm, and a knowledge distillation framework is designed for the fine-tuning stage. Specifically, an Incremental RCNN is proposed to achieve decoupled and stable feature representations of different categories. Under this framework, a Feature Knowledge Align (FKA) loss is designed between class-agnostic feature maps to deal with the catastrophic forgetting problem, and a Logit Knowledge Align (LKA) loss is deployed between logit distributions to tackle the misclassification problem. Experiments have been conducted on the incremental few-shot NEU-DET dataset, and the results show that DKAN outperforms other methods in various few-shot scenes, improving the mean Average Precision metric by up to 6.65%, which demonstrates the effectiveness of the proposed method.
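For readers less familiar with distillation-style alignment losses, the following is a minimal PyTorch sketch of the general idea behind feature-level and logit-level knowledge alignment: the fine-tuned (student) detector is encouraged to match the frozen pretrained (teacher) detector's feature maps and softened logit distributions. The function names, the mean-squared-error formulation for features, and the temperature-scaled KL divergence for logits are illustrative assumptions about the common technique, not the authors' DKAN implementation.

```python
# Illustrative sketch of feature-level and logit-level distillation losses.
# All names (fka_loss, lka_loss, T) are hypothetical; this is not DKAN's released code.
import torch
import torch.nn.functional as F

def fka_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Feature-level alignment: match class-agnostic feature maps of the
    student (fine-tuned) and teacher (pretrained) detectors."""
    return F.mse_loss(student_feat, teacher_feat.detach())

def lka_loss(student_logits: torch.Tensor, teacher_logits: torch.Tensor,
             T: float = 2.0) -> torch.Tensor:
    """Logit-level alignment: match temperature-softened logit distributions
    over the base (old) categories to reduce misclassification."""
    p_teacher = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```

In a typical fine-tuning setup, these two terms would be added (with weighting coefficients) to the detector's standard classification and regression losses; the exact weighting used by DKAN is described in the paper's method section.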