Title
Let's Agree to Degree: Comparing Graph Convolutional Networks in the Message-Passing Framework
Authors
Abstract
In this paper we cast neural networks defined on graphs as message-passing neural networks (MPNNs) in order to study the distinguishing power of different classes of such models. We are interested in whether certain architectures are able to tell vertices apart based on the feature labels given as input with the graph. We consider two variants of MPNNs: anonymous MPNNs, whose message functions depend only on the labels of the vertices involved; and degree-aware MPNNs, in which message functions can additionally use information regarding the degrees of vertices. The former class covers a popular formalism for computing functions on graphs: graph neural networks (GNNs). The latter covers the so-called graph convolutional networks (GCNs), a variant of GNNs recently introduced by Kipf and Welling. We obtain lower and upper bounds on the distinguishing power of MPNNs in terms of the distinguishing power of the Weisfeiler-Lehman (WL) algorithm. Our results imply that (i) the distinguishing power of GCNs is bounded by the WL algorithm, but that they are one step ahead; (ii) the WL algorithm cannot be simulated by "plain vanilla" GCNs, but the addition of a trade-off parameter between features of the vertex and those of its neighbours (as proposed by Kipf and Welling themselves) resolves this problem.
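The 1-WL colour-refinement procedure that bounds the distinguishing power of these models can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation; the function `wl_refinement` and its signature are hypothetical. Each round, a vertex's colour is combined with the multiset of its neighbours' colours, which is exactly the information an MPNN round aggregates (and the neighbour multiset implicitly encodes the vertex degree, the extra information available to degree-aware MPNNs).

```python
def wl_refinement(adjacency, labels, rounds=3):
    """One-dimensional Weisfeiler-Lehman colour refinement (sketch).

    adjacency: dict mapping each vertex to a list of its neighbours
    labels: dict mapping each vertex to its initial feature label
    Returns the colouring after `rounds` refinement iterations.
    """
    colours = dict(labels)
    for _ in range(rounds):
        new_colours = {}
        for v in adjacency:
            # The new colour hashes the vertex's own colour together with
            # the sorted multiset of its neighbours' colours.
            neighbour_multiset = tuple(sorted(colours[u] for u in adjacency[v]))
            new_colours[v] = hash((colours[v], neighbour_multiset))
        colours = new_colours
    return colours

# Usage: on the path 0 - 1 - 2 with identical initial labels, one round
# of refinement separates the middle vertex (degree 2) from the endpoints.
adj = {0: [1], 1: [0, 2], 2: [1]}
init = {0: "a", 1: "a", 2: "a"}
final = wl_refinement(adj, init, rounds=2)
```

Vertices that WL cannot separate after stabilisation are also indistinguishable to the MPNN classes studied in the paper; the converse direction is what the paper's lower bounds address.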