Title

Expressiveness and Approximation Properties of Graph Neural Networks

Authors

Floris Geerts and Juan L. Reutter

Abstract

Characterizing the separation power of graph neural networks (GNNs) provides an understanding of their limitations for graph learning tasks. Results regarding separation power are, however, usually geared toward specific GNN architectures, and tools for understanding arbitrary GNN architectures are generally lacking. We provide an elegant way to easily obtain bounds on the separation power of GNNs in terms of the Weisfeiler-Leman (WL) tests, which have become the yardstick for measuring the separation power of GNNs. The crux is to view GNNs as expressions in a procedural tensor language describing the computations in the layers of the GNNs. Then, by a simple analysis of the obtained expressions, in terms of the number of indices and the nesting depth of summations, bounds on the separation power in terms of the WL tests readily follow. We use the tensor language to define Higher-Order Message-Passing Neural Networks (or k-MPNNs), a natural extension of MPNNs. Furthermore, the tensor-language point of view allows for the derivation of universality results for classes of GNNs in a natural way. Our approach provides a toolbox with which GNN architecture designers can analyze the separation power of their GNNs, without needing to know the intricacies of the WL tests. We also provide insights into what is needed to boost the separation power of GNNs.
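For readers unfamiliar with the WL yardstick the abstract refers to, the following is a minimal sketch of the 1-dimensional Weisfeiler-Leman test (color refinement), the procedure that bounds the separation power of standard MPNNs. The function name and adjacency-list encoding are our own choices for illustration, not from the paper.

```python
from collections import Counter

def wl_refine(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman (color refinement) sketch.

    adj: adjacency list {node: [neighbors]}.
    Returns the multiset of final node colors. If two graphs get
    different results they are non-isomorphic; the converse fails,
    which is exactly the limitation 1-WL imposes on MPNNs.
    """
    colors = {v: 0 for v in adj}  # uniform initial coloring
    for _ in range(rounds):
        # New signature = (own color, multiset of neighbor colors),
        # mirroring an MPNN layer's aggregate-and-update step.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures back to small integer colors.
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return tuple(sorted(Counter(colors.values()).items()))

# Classic failure case: a 6-cycle and two disjoint triangles are both
# 2-regular, so 1-WL (and hence any standard MPNN) cannot separate them.
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

Separating such 1-WL-equivalent pairs requires higher-order tests (k-WL), which is what motivates the k-MPNNs defined via the tensor language above.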
