Paper Title
Learning Contextualized Knowledge Structures for Commonsense Reasoning
Paper Authors
Paper Abstract
Recently, knowledge graph (KG) augmented models have achieved noteworthy success on various commonsense reasoning tasks. However, KG edge (fact) sparsity and noisy edge extraction/generation often hinder models from obtaining useful knowledge to reason over. To address these issues, we propose a new KG-augmented model: Hybrid Graph Network (HGN). Unlike prior methods, HGN learns to jointly contextualize extracted and generated knowledge by reasoning over both within a unified graph structure. Given the task input context and an extracted KG subgraph, HGN is trained to generate embeddings for the subgraph's missing edges to form a "hybrid" graph, then reason over the hybrid graph while filtering out context-irrelevant edges. We demonstrate HGN's effectiveness through considerable performance gains across four commonsense reasoning benchmarks, as well as a user study on edge validity and helpfulness.
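To make the hybrid-graph idea in the abstract concrete, here is a minimal illustrative sketch, not the authors' implementation: it assumes a PyTorch setting with a hypothetical `edge_generator` that produces embeddings for missing edges from their endpoint nodes and the context, and a hypothetical `edge_scorer` that softly filters context-irrelevant edges. All module names, shapes, and the pairwise edge enumeration are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class HybridGraphSketch(nn.Module):
    """Illustrative sketch (not the paper's code): combine extracted KG edge
    embeddings with generated embeddings for missing edges to form a "hybrid"
    graph, then weight every edge by its relevance to the task context."""

    def __init__(self, dim):
        super().__init__()
        # Hypothetical components: a generator conditioned on the two endpoint
        # nodes plus the context, and a relevance scorer used for soft filtering.
        self.edge_generator = nn.Sequential(
            nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, node_emb, extracted_edges, context_emb):
        """
        node_emb:        (N, d) node embeddings of the extracted subgraph
        extracted_edges: dict {(i, j): (d,) edge embedding} taken from the KG
        context_emb:     (d,) encoding of the task input context
        Returns the node pairs, their edge embeddings, and context-relevance
        weights that downstream message passing could use for filtering.
        """
        n, _ = node_emb.shape
        pairs, edge_embs, weights = [], [], []
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                if (i, j) in extracted_edges:
                    e = extracted_edges[(i, j)]  # extracted KG edge
                else:
                    # Generate an embedding for the missing edge ("hybrid" part).
                    e = self.edge_generator(
                        torch.cat([node_emb[i], node_emb[j], context_emb]))
                # Soft filtering: score how relevant this edge is to the context.
                w = torch.sigmoid(self.edge_scorer(torch.cat([e, context_emb])))
                pairs.append((i, j))
                edge_embs.append(e)
                weights.append(w)
        return pairs, torch.stack(edge_embs), torch.stack(weights)
```

In this sketch the relevance weights would act as soft gates during graph reasoning, which mirrors the abstract's description of filtering out context-irrelevant edges; the actual HGN architecture and training objective are detailed in the paper itself.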