Paper Title
AAG: Self-Supervised Representation Learning by Auxiliary Augmentation with GNT-Xent Loss
Paper Authors
Paper Abstract
Self-supervised representation learning is an emerging research topic for its powerful capacity to learn from unlabeled data. As a mainstream self-supervised learning method, augmentation-based contrastive learning has achieved great success in various computer vision tasks that lack manual annotations. Despite current progress, existing methods are often limited by the extra cost of memory or storage, and their performance still has large room for improvement. Here we present a self-supervised representation learning method, AAG, which features an auxiliary augmentation strategy and a GNT-Xent loss. The auxiliary augmentation promotes the performance of contrastive learning by increasing the diversity of images. The proposed GNT-Xent loss enables a steady and fast training process and yields competitive accuracy. Experimental results demonstrate the superiority of AAG over previous state-of-the-art methods on CIFAR10, CIFAR100, and SVHN. In particular, AAG achieves 94.5% top-1 accuracy on CIFAR10 with batch size 64, which is 0.5% higher than the best result of SimCLR with batch size 1024.
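For context, the GNT-Xent loss named in the abstract builds on the NT-Xent contrastive objective used by SimCLR. The sketch below is a minimal PyTorch implementation of that standard NT-Xent baseline, not of the paper's GNT-Xent variant; the function name, temperature default, and tensor shapes are illustrative assumptions.

    # Minimal sketch of the standard NT-Xent loss (SimCLR baseline),
    # on which the paper's GNT-Xent variant is built. Illustrative only.
    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.5):
        """z1, z2: (N, D) projections of two augmented views of the same N images."""
        n = z1.size(0)
        # Concatenate both views and L2-normalize so dot products are cosine similarities.
        z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
        sim = z @ z.t() / temperature                        # (2N, 2N) similarity logits
        # Mask self-similarity so an embedding is never its own candidate pair.
        sim.fill_diagonal_(float('-inf'))
        # For row i, the positive is the same image's embedding in the other view.
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    # Usage with the batch size reported in the abstract (N = 64):
    z1, z2 = torch.randn(64, 128), torch.randn(64, 128)
    loss = nt_xent_loss(z1, z2)

Treating the (2N, 2N) similarity matrix as logits for a cross-entropy over positive pairs is the standard formulation; how GNT-Xent modifies this objective to stabilize and speed up training is specified in the paper itself.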