Paper Title

On the Soft-Subnetwork for Few-shot Class Incremental Learning

Paper Authors

Haeyong Kang, Jaehong Yoon, Sultan Rizky Hikmawan Madjid, Sung Ju Hwang, Chang D. Yoo

Paper Abstract

Inspired by Regularized Lottery Ticket Hypothesis (RLTH), which hypothesizes that there exist smooth (non-binary) subnetworks within a dense network that achieve the competitive performance of the dense network, we propose a few-shot class incremental learning (FSCIL) method referred to as \emph{Soft-SubNetworks (SoftNet)}. Our objective is to learn a sequence of sessions incrementally, where each session only includes a few training instances per class while preserving the knowledge of the previously learned ones. SoftNet jointly learns the model weights and adaptive non-binary soft masks at a base training session in which each mask consists of the major and minor subnetwork; the former aims to minimize catastrophic forgetting during training, and the latter aims to avoid overfitting to a few samples in each new training session. We provide comprehensive empirical validations demonstrating that our SoftNet effectively tackles the few-shot incremental learning problem by surpassing the performance of state-of-the-art baselines over benchmark datasets.
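
The abstract describes SoftNet's core mechanism: a non-binary soft mask over a dense network, split into a major subnetwork (intended to preserve knowledge from the base session) and a minor subnetwork (soft-valued entries that leave room to adapt to new few-shot sessions). Below is a minimal, hypothetical PyTorch sketch of how such a mask could be constructed. It uses simple weight-magnitude selection and uniform random values for the minor part purely for illustration; the paper learns its masks jointly with the model weights, and the names `soft_mask` and `top_frac` here are assumptions, not the authors' API.

```python
import torch

def soft_mask(weight: torch.Tensor, top_frac: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch of a soft-subnetwork mask: the top-`top_frac` fraction of
    weights by magnitude form the 'major' subnetwork (mask value 1), while the
    remaining 'minor' subnetwork receives random values drawn from U(0, 1),
    yielding a non-binary (soft) mask over the dense layer."""
    scores = weight.abs().flatten()
    k = max(1, int(top_frac * scores.numel()))
    threshold = torch.topk(scores, k).values.min()   # magnitude cut-off for the major subnetwork
    major = (weight.abs() >= threshold).float()      # 1 on major weights, 0 elsewhere
    minor = torch.rand_like(weight) * (1.0 - major)  # U(0,1) entries on the minor subnetwork
    return major + minor

# Usage: mask a layer's weights to obtain the soft-subnetwork used in the forward pass.
layer = torch.nn.Linear(128, 64)
mask = soft_mask(layer.weight.data, top_frac=0.5)
masked_weight = layer.weight * mask
```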
