Paper Title

Continual Learning with Recursive Gradient Optimization

Authors

Hao Liu, Huaping Liu

Abstract

Learning multiple tasks sequentially without forgetting previous knowledge, called Continual Learning (CL), remains a long-standing challenge for neural networks. Most existing methods rely on additional network capacity or data replay. In contrast, we introduce a novel approach that we refer to as Recursive Gradient Optimization (RGO). RGO is composed of an iteratively updated optimizer that modifies the gradient to minimize forgetting without data replay, and a virtual Feature Encoding Layer (FEL) that represents different long-term structures using only task descriptors. Experiments demonstrate that RGO performs significantly better than the baselines on popular continual classification benchmarks and achieves new state-of-the-art performance on 20-split-CIFAR100 (82.22%) and 20-split-miniImageNet (72.63%). With higher average accuracy than Single-Task Learning (STL), the method offers a flexible and reliable way to provide continual learning capabilities for learning models that rely on gradient descent.
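To make the abstract's description of gradient modification more concrete, below is a minimal NumPy sketch of a recursively updated projection applied to raw task gradients. Everything in it (the projection matrix P, the update_projection rule, and the random stand-ins for task features and gradients) is an illustrative assumption; the abstract does not spell out the actual RGO recursion or the FEL construction, so this should not be read as the paper's algorithm.

```python
import numpy as np

def modify_gradient(grad, P):
    """Project a flattened gradient through P, which encodes directions
    to avoid in order to protect previously learned tasks."""
    return P @ grad

def update_projection(P, feature, eps=1e-3):
    """Recursively shrink P along a new task's feature direction so that
    future gradient steps interfere less with that task. This is a
    generic recursive-least-squares-style projection update, not the
    exact RGO rule from the paper."""
    v = P @ feature
    return P - np.outer(v, v) / (eps + feature @ v)

# Toy usage with a single flattened parameter vector of dimension d.
d = 8
P = np.eye(d)                        # unconstrained before any task
rng = np.random.default_rng(0)
params = np.zeros(d)
lr = 0.1

for task in range(3):
    feature = rng.normal(size=d)     # stand-in for task statistics
    for step in range(5):
        grad = rng.normal(size=d)    # stand-in for a raw task gradient
        params -= lr * modify_gradient(grad, P)
    P = update_projection(P, feature)  # fold the finished task into P
```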
