Paper Title

Graph Representation Learning via Aggregation Enhancement

Paper Authors

Maxim Fishman, Chaim Baskin, Evgenii Zheltonozhskii, Almog David, Ron Banner, Avi Mendelson

Paper Abstract

Graph neural networks (GNNs) have become a powerful tool for processing graph-structured data but still face challenges in effectively aggregating and propagating information between layers, which limits their performance. We tackle this problem with the kernel regression (KR) approach, using KR loss as the primary loss in self-supervised settings or as a regularization term in supervised settings. We show substantial performance improvements over the state of the art in both scenarios on multiple transductive and inductive node classification datasets, especially for deep networks. As opposed to mutual information (MI), KR loss is convex and easy to estimate in high-dimensional cases, even though it indirectly maximizes the MI between its inputs. Our work highlights the potential of KR to advance the field of graph representation learning and enhance the performance of GNNs. The code to reproduce our experiments is available at https://github.com/Anonymous1252022/KR_for_GNNs.
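
For intuition, below is a minimal sketch of a kernel-regression-style loss of the kind the abstract describes. This is not the paper's exact formulation (see the repository above for that): the RBF kernel choice, the ridge parameter lam, the function names rbf_kernel and kr_loss, and the regularization weight alpha in the usage comment are all illustrative assumptions. The sketch computes the kernel ridge regression residual of predicting one representation from another; driving this residual down makes the output predictable from the input, which is one way a convex, easy-to-estimate objective can indirectly promote MI between layer representations.

```python
import torch

def rbf_kernel(x, sigma=1.0):
    # n x n RBF (Gaussian) kernel matrix over the row vectors of x.
    d2 = torch.cdist(x, x).pow(2)
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def kr_loss(x, y, lam=1e-3, sigma=1.0):
    # Kernel ridge regression residual of predicting y (n x d) from x:
    # with smoother H = Kx (Kx + n*lam*I)^(-1), the loss is
    # ||(I - H) y||_F^2 / n. Illustrative form only, not the paper's
    # exact KR loss definition.
    n = x.shape[0]
    kx = rbf_kernel(x, sigma)
    eye = torch.eye(n, device=x.device, dtype=x.dtype)
    # Since Kx is symmetric, (Kx + n*lam*I)^(-1) Kx = Kx (Kx + n*lam*I)^(-1).
    h = torch.linalg.solve(kx + n * lam * eye, kx)
    return ((eye - h) @ y).pow(2).sum() / n

# Hypothetical supervised usage: regularize a GNN layer so its output
# h_out stays predictable from its input h_in (alpha is an assumed
# hyperparameter, not taken from the paper):
#   loss = cross_entropy(logits[mask], labels[mask]) \
#          + alpha * kr_loss(h_in[mask], h_out[mask])
# In a self-supervised setting, kr_loss alone could serve as the
# training objective, per the abstract.
```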
