Paper Title

Vision Transformer for Contrastive Clustering

Paper Authors

Hua-Bao Ling, Bowen Zhu, Dong Huang, Ding-Hua Chen, Chang-Dong Wang, Jian-Huang Lai

Paper Abstract

Vision Transformer (ViT) has shown its advantages over the convolutional neural network (CNN) with its ability to capture global long-range dependencies for visual representation learning. Besides ViT, contrastive learning has recently become another popular research topic. While previous contrastive learning works are mostly based on CNNs, some recent studies have attempted to combine ViT and contrastive learning for enhanced self-supervised learning. Despite the considerable progress, these combinations of ViT and contrastive learning mostly focus on instance-level contrastiveness, often overlooking global contrastiveness and lacking the ability to directly learn the clustering result (e.g., for images). In view of this, this paper presents a novel deep clustering approach termed Vision Transformer for Contrastive Clustering (VTCC), which, to our knowledge, is the first to unify the Transformer and contrastive learning for the image clustering task. Specifically, with two random augmentations performed on each image, we utilize a ViT encoder with two weight-sharing views as the backbone. To remedy the potential instability of the ViT, we incorporate a convolutional stem, which uses multiple stacked small convolutions instead of a single large convolution in the patch projection layer, to split each augmented sample into a sequence of patches. With the feature representations of the patch sequences learned via the backbone, an instance projector and a cluster projector are further utilized to perform instance-level contrastive learning and global clustering structure learning, respectively. Experiments on eight image datasets demonstrate the stability (when training from scratch) and the superiority (in clustering performance) of our VTCC approach over state-of-the-art methods.
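To make the pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the named components: a convolutional stem built from stacked small convolutions, a weight-sharing Transformer encoder applied to two augmented views, and the instance/cluster projectors. All dimensions, depths, the temperature, and the exact loss form (a standard NT-Xent applied to instance embeddings and, for the cluster head, to the transposed soft-assignment matrix) are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch of a VTCC-style pipeline (illustrative hyperparameters only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvStem(nn.Module):
    """Patchify with stacked small (3x3, stride-2) convolutions instead of a
    single large-kernel patch-projection convolution, as the abstract suggests
    for stabilizing ViT training."""
    def __init__(self, in_ch=3, embed_dim=256):
        super().__init__()
        dims = [in_ch, 32, 64, 128, embed_dim]
        layers = []
        for i in range(4):  # four stride-2 convs: 224x224 -> 14x14 patch grid
            layers += [nn.Conv2d(dims[i], dims[i + 1], 3, stride=2, padding=1),
                       nn.BatchNorm2d(dims[i + 1]), nn.ReLU(inplace=True)]
        self.stem = nn.Sequential(*layers)

    def forward(self, x):                      # (B, 3, H, W)
        x = self.stem(x)                       # (B, D, H/16, W/16)
        return x.flatten(2).transpose(1, 2)    # (B, num_patches, D)

class VTCC(nn.Module):
    """Shared backbone applied to both augmented views, followed by an
    instance projector and a cluster projector (softmax assignments)."""
    def __init__(self, embed_dim=256, n_clusters=10):
        super().__init__()
        self.stem = ConvStem(embed_dim=embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, nhead=8,
                                           dim_feedforward=512,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.instance_proj = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, 128))
        self.cluster_proj = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, n_clusters), nn.Softmax(dim=1))

    def forward(self, x):
        h = self.encoder(self.stem(x)).mean(dim=1)   # mean-pool patch tokens
        return self.instance_proj(h), self.cluster_proj(h)

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent contrastive loss between two batches of paired rows."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))       # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets.to(z.device))

# Two random augmentations per image -> instance-level loss on the row
# embeddings, cluster-level loss on the columns of the soft assignments.
model = VTCC()
x1, x2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
(zi1, zc1), (zi2, zc2) = model(x1), model(x2)
loss = nt_xent(zi1, zi2) + nt_xent(zc1.t(), zc2.t())
loss.backward()
```

Applying NT-Xent to the columns of the softmax output is one common way to realize cluster-level (global) contrastiveness; the paper's precise cluster projector and any additional regularization terms may differ.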
