Paper Title

Prototypical quadruplet for few-shot class incremental learning

Authors

Palit, Sanchar; Banerjee, Biplab; Chaudhuri, Subhasis

Abstract

Scarcity of data and incremental learning of new tasks pose two major bottlenecks for many modern computer vision algorithms. Catastrophic forgetting, i.e., a model's inability to classify previously learned data after training on new batches of data, is a major challenge. Conventional methods address catastrophic forgetting at the cost of the current session's training. Generative replay-based approaches, such as generative adversarial networks (GANs), have been proposed to mitigate catastrophic forgetting, but training GANs with few samples can be unstable. To address these challenges, we propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrastive loss. Our approach retains previously acquired knowledge in the embedding space, even when trained on new classes, by updating previous-session class prototypes to represent the true class means, which is crucial for our nearest class mean classification strategy. We demonstrate the effectiveness of our method by showing that the embedding space remains intact after the model is trained on new classes, and that our method outperforms existing state-of-the-art algorithms in accuracy across sessions.
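The abstract names two concrete mechanisms: a quadruplet-style contrastive objective for shaping the embedding space, and nearest class mean (NCM) classification over per-class prototypes that are updated as new sessions arrive. The sketch below illustrates both in plain NumPy under stated assumptions: the `quadruplet_loss` shown is the standard formulation of Chen et al. (CVPR 2017) used as a stand-in, not necessarily the paper's "prototypical" variant, and the running-mean prototype update is an illustrative choice, not the authors' actual rule.

```python
import numpy as np

def quadruplet_loss(anchor, positive, neg1, neg2, alpha1=1.0, alpha2=0.5):
    """Standard quadruplet loss (Chen et al., CVPR 2017); a stand-in for
    the paper's prototypical-quadruplet objective, whose exact form the
    abstract does not specify. neg1/neg2 come from two different classes,
    both distinct from the anchor's class."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - neg1) ** 2)
    d_nn = np.sum((neg1 - neg2) ** 2)
    # First term pushes the negative away from the anchor; the second
    # enforces a margin against an unrelated negative pair, tightening
    # the embedding space overall.
    return max(0.0, d_ap - d_an + alpha1) + max(0.0, d_ap - d_nn + alpha2)

class NCMClassifier:
    """Nearest-class-mean classifier over an embedding space.

    Illustrative only: the incremental running-mean update below is one
    way to keep old-session prototypes near the true class mean as new
    data arrives (the role the abstract describes), not the authors'
    published update rule."""

    def __init__(self, dim):
        self.dim = dim
        self.prototypes = {}  # class id -> mean embedding
        self.counts = {}      # class id -> number of samples folded in

    def update_prototype(self, label, embeddings):
        """Fold a batch of embeddings for `label` into its class mean."""
        batch_mean = embeddings.mean(axis=0)
        n_new = len(embeddings)
        n_old = self.counts.get(label, 0)
        old = self.prototypes.get(label, np.zeros(self.dim))
        self.prototypes[label] = (n_old * old + n_new * batch_mean) / (n_old + n_new)
        self.counts[label] = n_old + n_new

    def predict(self, embedding):
        """Assign the class whose prototype is nearest in Euclidean distance."""
        labels = list(self.prototypes)
        dists = [np.linalg.norm(embedding - self.prototypes[c]) for c in labels]
        return labels[int(np.argmin(dists))]

# Toy usage: two "sessions", each introducing new classes, with random
# vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
clf = NCMClassifier(dim=16)
for session_classes in [[0, 1], [2]]:
    for c in session_classes:
        feats = rng.normal(loc=c, scale=0.1, size=(20, 16))
        clf.update_prototype(c, feats)
print(clf.predict(rng.normal(loc=2, scale=0.1, size=16)))  # -> 2
```

Because prediction depends only on distances to stored prototypes, keeping those prototypes aligned with the true class means after each session is what makes the NCM strategy robust to new classes, which is the role the abstract assigns to its prototype update.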
