Paper Title
Self-supervised Representations and Node Embedding Graph Neural Networks for Accurate and Multi-scale Analysis of Materials
Paper Authors
Paper Abstract
Supervised machine learning algorithms, such as graph neural networks (GNNs), have successfully predicted material properties. However, the superior performance of GNNs usually relies on end-to-end learning on large material datasets, which may lose physical insight into the multi-scale information of materials. Moreover, labeling data consumes substantial resources and inevitably introduces errors, which constrains prediction accuracy. We propose to train the GNN model by self-supervised learning on the node and edge information of the crystal graph. Compared with popular manually constructed material descriptors, the self-supervised atomic representation achieves better prediction performance on material properties. Furthermore, it can provide physical insights by tuning the range of information it encodes. Applying the self-supervised atomic representation to magnetic-moment datasets, we show how it extracts rules and information from magnetic materials. To incorporate rich physical information into the GNN model, we develop the node embedding graph neural network (NEGNN) framework, which shows significant improvements in prediction performance. The self-supervised material representation and the NEGNN framework can extract in-depth information from materials and can be applied to small datasets with increased prediction accuracy.
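The core idea of self-supervised pretraining on a crystal graph can be illustrated with a minimal sketch: aggregate each atom's neighbor features by message passing, then learn to reconstruct the atom's own features from that aggregate, so the learned map encodes local chemical environment without any property labels. This is a toy NumPy illustration of the pretext-task idea, not the paper's actual architecture; the graph, feature dimensions, and the linear decoder `W` are all hypothetical stand-ins.

```python
import numpy as np

# Toy "crystal graph": 4 atoms with 3-dimensional node features
# (assumed stand-ins for the element/orbital attributes used in practice).
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))          # node features
A = np.array([[0, 1, 1, 0],          # undirected bond adjacency
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# One-hop message passing: mean of each atom's neighbor features.
deg = A.sum(axis=1, keepdims=True)
H = (A @ X) / deg                    # aggregated neighbor representation

# Self-supervised pretext task: fit a linear "decoder" W so that the
# neighbor aggregate predicts the node's own features, H @ W ≈ X.
# No material-property labels are used anywhere.
W, *_ = np.linalg.lstsq(H, X, rcond=None)
recon_err = float(np.linalg.norm(H @ W - X))
print(W.shape, recon_err >= 0.0)
```

In the full method the linear decoder would be replaced by a trained GNN, and the resulting node representations (rather than the reconstruction itself) serve as the atomic descriptors fed into downstream property prediction.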