Paper Title
Pathfinder Discovery Networks for Neural Message Passing
Paper Authors
Paper Abstract
In this work we propose Pathfinder Discovery Networks (PDNs), a method for jointly learning a message passing graph over a multiplex network with a downstream semi-supervised model. PDNs inductively learn an aggregated weight for each edge, optimized to produce the best outcome for the downstream learning task. PDNs are a generalization of attention mechanisms on graphs which allow flexible construction of similarity functions between nodes, edge convolutions, and cheap multiscale mixing layers. We show that PDNs overcome weaknesses of existing methods for graph attention (e.g. Graph Attention Networks), such as the diminishing weight problem. Our experimental results demonstrate competitive predictive performance on academic node classification tasks. Additional results from a challenging suite of node classification experiments show how PDNs can learn a wider class of functions than existing baselines. We analyze the relative computational complexity of PDNs, and show that PDN runtime is not considerably higher than static-graph models. Finally, we discuss how PDNs can be used to construct an easily interpretable attention mechanism that allows users to understand information propagation in the graph.
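To make the core idea concrete, the following is a minimal, hypothetical sketch of a PDN-style layer in PyTorch. It is not the authors' implementation: it assumes that each edge of the multiplex network carries a feature vector (one entry per constituent graph), maps those features to a single positive learned weight per edge with a small MLP, and then uses the learned weights to scale and normalize the messages aggregated by an ordinary graph convolution. The class and argument names (PathfinderDiscoveryLayer, edge_feat_dim, edge_feats) are illustrative choices, not names from the paper.

```python
# Hypothetical PDN-style layer (illustrative sketch, not the authors' code).
import torch
import torch.nn as nn


class PathfinderDiscoveryLayer(nn.Module):
    def __init__(self, edge_feat_dim, in_dim, out_dim, hidden_dim=16):
        super().__init__()
        # Small MLP that turns per-edge multiplex features into one weight per edge.
        self.edge_scorer = nn.Sequential(
            nn.Linear(edge_feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Softplus(),  # keep the learned edge weights strictly positive
        )
        self.node_transform = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_feats):
        # x:          [num_nodes, in_dim] node features
        # edge_index: [2, num_edges] source/target node indices
        # edge_feats: [num_edges, edge_feat_dim] per-edge multiplex features
        src, dst = edge_index
        w = self.edge_scorer(edge_feats).squeeze(-1)            # learned weight per edge
        msg = self.node_transform(x)[src] * w.unsqueeze(-1)     # weighted messages
        out = torch.zeros(x.size(0), msg.size(-1), device=x.device)
        out.index_add_(0, dst, msg)                             # sum-aggregate per target node
        # Normalize by the total learned weight arriving at each node.
        norm = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, w)
        return out / norm.clamp(min=1e-6).unsqueeze(-1)
```

Because the edge scorer and the downstream node classifier are trained end to end, the gradients from the classification loss shape the aggregated edge weights, which is the joint learning of the message passing graph described in the abstract.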