Paper Title

CFA: Constraint-based Finetuning Approach for Generalized Few-Shot Object Detection

Paper Authors

Karim Guirguis, Ahmed Hendawy, George Eskandar, Mohamed Abdelsamad, Matthias Kayser, Juergen Beyerer

Paper Abstract

Few-shot object detection (FSOD) seeks to detect novel categories with limited data by leveraging prior knowledge from abundant base data. Generalized few-shot object detection (G-FSOD) aims to tackle FSOD without forgetting previously seen base classes and, thus, accounts for a more realistic scenario, where both classes are encountered during test time. While current FSOD methods suffer from catastrophic forgetting, G-FSOD addresses this limitation yet exhibits a performance drop on novel tasks compared to the state-of-the-art FSOD. In this work, we propose a constraint-based finetuning approach (CFA) to alleviate catastrophic forgetting, while achieving competitive results on the novel task without increasing the model capacity. CFA adapts a continual learning method, namely Average Gradient Episodic Memory (A-GEM) to G-FSOD. Specifically, more constraints on the gradient search strategy are imposed from which a new gradient update rule is derived, allowing for better knowledge exchange between base and novel classes. To evaluate our method, we conduct extensive experiments on MS-COCO and PASCAL-VOC datasets. Our method outperforms current FSOD and G-FSOD approaches on the novel task with minor degeneration on the base task. Moreover, CFA is orthogonal to FSOD approaches and operates as a plug-and-play module without increasing the model capacity or inference time.
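
The abstract describes CFA as a constrained variant of A-GEM's gradient projection. For orientation, the baseline A-GEM rule that CFA builds on can be sketched as below. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the function name `agem_project`, the flattened-gradient interface, and the toy tensors in the usage example are assumptions for illustration, and CFA's own update rule (derived from additional constraints, per the abstract) is not reproduced here.

```python
import torch


def agem_project(grad_novel: torch.Tensor, grad_base: torch.Tensor) -> torch.Tensor:
    """Baseline A-GEM-style gradient projection that CFA constrains further (sketch).

    grad_novel: flattened gradient of the loss on the current (novel-class) batch.
    grad_base:  flattened reference gradient computed on an episodic memory of
                base-class samples.
    """
    dot = torch.dot(grad_novel, grad_base)
    if dot >= 0:
        # Gradients agree: keep the plain update on the novel task.
        return grad_novel
    # Gradients conflict: remove the component of grad_novel that would
    # increase the base-task loss (project onto the non-increasing half-space).
    return grad_novel - (dot / torch.dot(grad_base, grad_base)) * grad_base


# Toy usage with made-up gradients: the conflicting component is removed,
# so the projected gradient is orthogonal to the base-task gradient.
g_novel = torch.tensor([1.0, -2.0])
g_base = torch.tensor([0.5, 1.0])
print(agem_project(g_novel, g_base))                      # tensor([ 1.6000, -0.8000])
print(torch.dot(agem_project(g_novel, g_base), g_base))   # ~0
```

In practice, the two gradients would be flattened from all model parameters after separate backward passes on the novel batch and the memory batch. Per the abstract, CFA replaces this hard projection with an update rule derived from tighter constraints on the gradient search, allowing knowledge to flow in both directions between base and novel classes.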
