Paper Title
Learning Graph Algorithms With Recurrent Graph Neural Networks
Paper Authors
Paper Abstract
Classical graph algorithms work well for combinatorial problems that can be thoroughly formalized and abstracted. Once the algorithm is derived, it generalizes to instances of any size. However, developing an algorithm that handles complex structures and interactions in the real world can be challenging. Rather than specifying the algorithm, we can try to learn it from the graph-structured data. Graph Neural Networks (GNNs) are inherently capable of working on graph structures; however, they struggle to generalize well, and learning on larger instances is challenging. In order to scale, we focus on a recurrent architecture design that can learn simple graph problems end to end on smaller graphs and then extrapolate to larger instances. As our main contribution, we identify three essential techniques for recurrent GNNs to scale. By using (i) skip connections, (ii) state regularization, and (iii) edge convolutions, we can guide GNNs toward extrapolation. This allows us to train on small graphs and apply the same model to much larger graphs during inference. Moreover, we empirically validate the extrapolation capabilities of our GNNs on algorithmic datasets.
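The abstract only names the three techniques, so as a concrete illustration, here is a minimal PyTorch sketch of one way they could fit together in a single recurrent message-passing model. Everything in it is an illustrative assumption rather than the paper's implementation: the dense-adjacency interface, the GRU-based state update, the EdgeConv-style messages standing in for "edge convolutions", and the L2 penalty on node states standing in for "state regularization".

```python
import torch
import torch.nn as nn


class RecurrentEdgeConvGNN(nn.Module):
    """Sketch of a recurrent GNN step combining the three techniques.

    Assumptions (not from the paper): dense adjacency input, a GRU cell
    as the recurrent update, and an L2 penalty on node states as the
    state regularizer.
    """

    def __init__(self, in_dim: int, dim: int):
        super().__init__()
        self.encode = nn.Linear(in_dim, dim)
        # (iii) edge convolution: EdgeConv-style messages computed from
        # the receiving state and the difference to the sending state.
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.update = nn.GRUCell(dim, dim)

    def forward(self, x, adj, steps):
        x0 = self.encode(x)  # encoded input, reused by the skip connection
        h = x0
        reg = torch.zeros((), device=x.device)
        for _ in range(steps):  # recurrence: the same weights at every step
            src, dst = adj.nonzero(as_tuple=True)
            msg = self.edge_mlp(torch.cat([h[dst], h[src] - h[dst]], dim=-1))
            # sum-aggregate incoming messages at each destination node
            agg = torch.zeros_like(h).index_add_(0, dst, msg)
            # (i) skip connection: re-inject the encoded input every step
            h = self.update(agg + x0, h)
            # (ii) state regularization (assumed here to be an L2 penalty)
            reg = reg + h.pow(2).mean()
        return h, reg


# Toy usage on a 5-node path graph; at inference time the same weights
# can be unrolled for more steps on a larger graph.
n = 5
adj = torch.zeros(n, n)
idx = torch.arange(n - 1)
adj[idx, idx + 1] = adj[idx + 1, idx] = 1.0
model = RecurrentEdgeConvGNN(in_dim=3, dim=16)
h, reg = model(torch.randn(n, 3), adj, steps=n)
# Training would add the regularizer to a (hypothetical) task loss:
# loss = task_loss + 1e-4 * reg
```

Because the message/update weights are shared across steps, the number of recurrent iterations can simply be increased with the graph size at inference time, which is what allows training on small graphs and extrapolating to larger ones.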