Paper Title
NASE: Learning Knowledge Graph Embedding for Link Prediction via Neural Architecture Search
Paper Authors
Paper Abstract
Link prediction is the task of predicting missing connections between entities in a knowledge graph (KG). While various models have been proposed for the link prediction task, most of them are designed around a few known relation patterns in several well-known datasets. Due to the diverse and complex nature of real-world KGs, it is inherently difficult to design a model that fits all datasets well. To address this issue, previous work has tried to use Automated Machine Learning (AutoML) to search for the best model for a given dataset. However, their search space is limited to the bilinear model family. In this paper, we propose a novel Neural Architecture Search (NAS) framework for the link prediction task. First, the embeddings of the input triplet are refined by the Representation Search Module. Then, the prediction score is searched within the Score Function Search Module. This framework entails a more general search space, which enables us to take advantage of several mainstream model families, and thus it can potentially achieve better performance. We relax the search space to be continuous so that the architecture can be optimized efficiently using gradient-based search strategies. Experimental results on several benchmark datasets demonstrate the effectiveness of our method compared with several state-of-the-art approaches.
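The abstract's key mechanism, relaxing a discrete search space to be continuous so that architecture choices can be optimized by gradient descent, can be illustrated with a minimal sketch. The candidate score functions below (a DistMult-style bilinear score and a TransE-style translational score) and all names are illustrative assumptions, not the paper's actual search space; the point is only how a softmax over architecture parameters turns "pick one score function" into a differentiable mixture.

```python
# Minimal sketch of continuous relaxation over candidate score functions,
# in the spirit of the gradient-based NAS described in the abstract.
# Candidate functions, names, and dimensions are illustrative assumptions.
import numpy as np

def distmult(h, r, t):
    # Bilinear-family candidate: sum_i h_i * r_i * t_i
    return float(np.sum(h * r * t))

def transe(h, r, t):
    # Translation-family candidate: -||h + r - t||
    return float(-np.linalg.norm(h + r - t))

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mixed_score(h, r, t, alpha):
    """Softmax-weighted mixture of candidate score functions.

    alpha holds continuous architecture parameters; during search they
    would be updated by gradient descent, and afterwards the candidate
    with the largest weight would be selected as the discrete choice.
    """
    weights = softmax(alpha)
    candidates = np.array([distmult(h, r, t), transe(h, r, t)])
    return float(weights @ candidates)

rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=8) for _ in range(3))
alpha = np.zeros(2)  # uniform mixture before any optimization
score = mixed_score(h, r, t, alpha)
```

Because `mixed_score` is differentiable in `alpha`, architecture parameters and embeddings can be trained jointly with standard gradient-based optimizers, which is what makes this style of search efficient compared with discrete enumeration.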