Paper Title
From Local to Global: Spectral-Inspired Graph Neural Networks
Paper Authors
Paper Abstract
Graph Neural Networks (GNNs) are powerful deep learning methods for non-Euclidean data. Popular GNNs are message-passing neural networks (MPNNs) that aggregate and combine signals in a local graph neighborhood. However, shallow MPNNs tend to miss long-range signals and perform poorly on some heterophilous graphs, while deep MPNNs can suffer from issues like over-smoothing or over-squashing. To mitigate such issues, existing works typically borrow normalization techniques from training neural networks on Euclidean data or modify the graph structure. Yet these approaches are not well understood theoretically and can increase the overall computational complexity. In this work, we draw inspiration from spectral graph embedding and propose $\texttt{PowerEmbed}$ -- a simple layer-wise normalization technique to boost MPNNs. We show that $\texttt{PowerEmbed}$ can provably express the top-$k$ leading eigenvectors of the graph operator, which prevents over-smoothing and is agnostic to the graph topology; meanwhile, it produces a list of representations ranging from local features to global signals, which avoids over-squashing. We apply $\texttt{PowerEmbed}$ to a wide range of simulated and real graphs and demonstrate its competitive performance, particularly for heterophilous graphs.
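To make the abstract's mechanism concrete, below is a minimal sketch of a PowerEmbed-style scheme based only on what the abstract states: message passing interleaved with a layer-wise normalization so that deep layers approach the top-$k$ eigenvector subspace of the graph operator (as in subspace/power iteration), while every intermediate layer is kept as a representation. The specific choices here -- QR orthonormalization as the normalization step, the symmetric-normalized adjacency as the graph operator, and the function name `power_embed` -- are illustrative assumptions, not the paper's exact method.

```python
# Sketch only: QR normalization and the symmetric-normalized adjacency are
# assumed for illustration; the paper's actual normalization may differ.
import numpy as np

def power_embed(adj: np.ndarray, features: np.ndarray, num_layers: int):
    """Return a list of representations, from local features to global signals.

    adj      : (n, n) symmetric adjacency matrix
    features : (n, k) initial node features (k = number of tracked directions)
    """
    # Assumed graph operator: S = D^{-1/2} A D^{-1/2}.
    deg = adj.sum(axis=1).astype(float)
    d_inv_sqrt = np.zeros_like(deg)
    mask = deg > 0
    d_inv_sqrt[mask] = deg[mask] ** -0.5
    op = d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]

    reps = [features]          # layer 0: purely local features
    x = features
    for _ in range(num_layers):
        x = op @ x             # message passing: one propagation step
        x, _ = np.linalg.qr(x) # layer-wise normalization (orthonormalize);
                               # keeps k independent directions instead of
                               # collapsing to one, i.e., no over-smoothing
        reps.append(x)
    return reps                # local-to-global list avoids over-squashing

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=(50, 50))
    A = np.triu(A, 1); A = A + A.T                 # random undirected graph
    X = rng.standard_normal((50, 4))
    embeddings = power_embed(A, X, num_layers=20)
    print(len(embeddings), embeddings[-1].shape)   # 21 (50, 4)
```

Under these assumptions, the loop is orthogonal (subspace) iteration: for a symmetric operator, the normalized representation converges to the top-$k$ leading eigenvector subspace regardless of graph topology, which matches the expressivity claim in the abstract.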