Paper Title

A Multi-Head Model for Continual Learning via Out-of-Distribution Replay

Authors

Gyuhak Kim, Zixuan Ke, Bing Liu

Abstract

This paper studies class incremental learning (CIL), a setting of continual learning (CL). Many approaches have been proposed to deal with catastrophic forgetting (CF) in CIL. Most methods incrementally construct a single classifier for all classes of all tasks in a single-head network. To prevent CF, a popular approach is to memorize a small number of samples from previous tasks and replay them when training the new task. However, this approach still suffers from serious CF because the parameters learned for previous tasks are updated or adjusted with only the limited number of saved samples in memory. This paper proposes an entirely different approach, called MORE, which uses a transformer network to build a separate classifier (head) for each task (a multi-head model). Instead of using the saved samples in memory to update the network for previous tasks/classes, as existing approaches do, MORE leverages the saved samples to build a task-specific classifier (adding a new classification head) without updating the network learned for previous tasks/classes. The model for each new task is trained to learn the classes of that task and also to detect samples that are out-of-distribution (OOD), i.e., not from the task's data distribution. At test time, the classifier of the task to which a test instance belongs produces a high score for the correct class, while the classifiers of the other tasks produce low scores because the test instance is not from their data distributions. Experimental results show that MORE outperforms state-of-the-art baselines and is also naturally capable of OOD detection in the continual learning setting.
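To make the mechanism concrete, here is a minimal PyTorch sketch of the idea described in the abstract. It is not the paper's implementation: the head layout (one linear layer per task with an extra explicit OOD logit), the frozen shared backbone, and the max-over-heads scoring rule are simplifying assumptions for illustration, and the paper's transformer/adapter details are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadOODModel(nn.Module):
    """Sketch of a multi-head model: a shared feature extractor plus one
    classification head per task. Each head has (num_classes + 1) outputs;
    the extra logit is an explicit OOD class trained on replay-buffer
    samples from other tasks (an assumed design, for illustration)."""

    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone      # e.g. a pre-trained network, kept frozen here
        self.heads = nn.ModuleList()  # one head per task
        self.feat_dim = feat_dim

    def add_head(self, num_classes: int):
        # A new task adds a new head; parameters of earlier heads are untouched.
        self.heads.append(nn.Linear(self.feat_dim, num_classes + 1))

    def forward(self, x, task_id: int):
        feats = self.backbone(x)
        return self.heads[task_id](feats)  # logits: [task classes..., OOD]

def train_step(model, optimizer, task_id, in_x, in_y, replay_x):
    """One training step for a task head: in-distribution samples keep their
    labels; replay samples from other tasks are labeled as the OOD class."""
    ood_label = model.heads[task_id].out_features - 1
    x = torch.cat([in_x, replay_x])
    y = torch.cat([in_y,
                   torch.full((len(replay_x),), ood_label, dtype=torch.long)])
    loss = F.cross_entropy(model(x, task_id), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict(model, x):
    """Class-incremental inference without a task oracle: every head scores x,
    and the global prediction is the class with the highest score across all
    heads (assumes tasks own consecutive blocks of global class ids)."""
    scores = [F.softmax(model(x, t), dim=-1)[:, :-1]  # drop the OOD column
              for t in range(len(model.heads))]
    return torch.cat(scores, dim=-1).argmax(dim=-1)

if __name__ == "__main__":
    # Toy usage with random data and a hypothetical flat backbone.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128))
    model = MultiHeadOODModel(backbone, feat_dim=128)
    model.add_head(num_classes=2)  # task 0 with 2 classes
    opt = torch.optim.SGD(model.heads[0].parameters(), lr=0.01)
    in_x = torch.randn(8, 3, 32, 32)
    in_y = torch.randint(0, 2, (8,))
    replay_x = torch.randn(8, 3, 32, 32)  # memory samples from other tasks
    print(train_step(model, opt, 0, in_x, in_y, replay_x))
    print(predict(model, in_x))
```

Note how no task identity is needed at inference in this sketch: a head whose task does not own the test instance was trained to shift probability mass onto its OOD output, so its in-distribution scores stay low and the correct head's class wins the argmax.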
