Paper Title

Neuralizing Efficient Higher-order Belief Propagation

Authors

Mohammed Haroon Dupty, Wee Sun Lee

Abstract

Graph neural network models have been extensively used to learn node representations for graph structured data in an end-to-end setting. These models often rely on localized first order approximations of spectral graph convolutions and hence are unable to capture higher-order relational information between nodes. Probabilistic Graphical Models form another class of models that provide rich flexibility in incorporating such relational information but are limited by inefficient approximate inference algorithms at higher order. In this paper, we propose to combine these approaches to learn better node and graph representations. First, we derive an efficient approximate sum-product loopy belief propagation inference algorithm for higher-order PGMs. We then embed the message passing updates into a neural network to provide the inductive bias of the inference algorithm in end-to-end learning. This gives us a model that is flexible enough to accommodate domain knowledge while maintaining the computational advantage. We further propose methods for constructing higher-order factors that are conditioned on node and edge features and share parameters wherever necessary. Our experimental evaluation shows that our model indeed captures higher-order information, substantially outperforming state-of-the-art $k$-order graph neural networks in molecular datasets.
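
For reference, the standard sum-product loopy belief propagation updates on a factor graph, which the abstract builds on, are given below; this is background only, and the paper's efficient higher-order approximation and its neural embedding are not reproduced here. $N(\cdot)$ denotes factor-graph neighbors and $\mathbf{x}_f$ the variables attached to factor $f$:

$$\mu_{x \to f}(x) = \prod_{g \in N(x) \setminus \{f\}} \mu_{g \to x}(x)$$

$$\mu_{f \to x}(x) = \sum_{\mathbf{x}_f \setminus \{x\}} f(\mathbf{x}_f) \prod_{y \in N(f) \setminus \{x\}} \mu_{y \to f}(y)$$

$$b(x) \propto \prod_{g \in N(x)} \mu_{g \to x}(x)$$

The factor-to-variable message sums over all joint assignments of the factor's other arguments, so a factor over $k$ variables with $d$ states each costs $O(d^{k})$ per update; this exponential cost is what motivates the efficient approximation for higher-order factors that the paper derives and then embeds as message-passing layers in a neural network.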
