Paper Title


GraphCL: Contrastive Self-Supervised Learning of Graph Representations

Authors

Hakim Hafidi, Mounir Ghogho, Philippe Ciblat, Ananthram Swami

Abstract


We propose Graph Contrastive Learning (GraphCL), a general framework for learning node representations in a self-supervised manner. GraphCL learns node embeddings by maximizing the similarity between the representations of two randomly perturbed versions of the intrinsic features and link structure of the same node's local subgraph. We use graph neural networks to produce two representations of the same node and leverage a contrastive learning loss to maximize agreement between them. In both transductive and inductive learning setups, we demonstrate that our approach significantly outperforms the state of the art in unsupervised learning on a number of node classification benchmarks.
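The pipeline the abstract describes (create two randomly perturbed views of a node's features and link structure, encode each with a GNN, then maximize agreement with a contrastive loss) can be sketched as below. The edge-dropping/feature-masking augmentation, the toy one-layer mean-aggregation "encoder", and the NT-Xent-style loss are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def perturb_graph(adj, feats, edge_drop=0.2, feat_mask=0.2, rng=None):
    """Return one randomly perturbed view: drop edges, zero out feature dims.
    (Illustrative augmentation; the paper's exact scheme may differ.)"""
    if rng is None:
        rng = np.random.default_rng()
    a = adj.copy()
    upper = np.triu(rng.random(a.shape) < edge_drop, k=1)
    a[upper | upper.T] = 0                        # drop edges symmetrically
    x = feats.copy()
    x[:, rng.random(feats.shape[1]) < feat_mask] = 0.0  # mask feature dims
    return a, x

def encode(adj, feats):
    """Toy one-layer mean-aggregation stand-in for the GNN encoder."""
    deg = adj.sum(axis=1, keepdims=True) + 1      # +1 for the self term
    return (feats + adj @ feats) / deg            # mean of self + neighbors

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive (NT-Xent-style) loss: row i of z1 and row i of z2 are the
    two views of the same node; all other rows act as negatives."""
    z = np.concatenate([z1, z2], axis=0)          # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature                 # cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float((logsumexp - sim[np.arange(2 * n), pos]).mean())
```

In use, two views of the same graph are perturbed and encoded independently, and the loss pulls the two embeddings of each node together while pushing apart embeddings of different nodes.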
