Paper Title
KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering
Paper Authors
Paper Abstract
Extractive Question Answering (EQA) is one of the most important tasks in Machine Reading Comprehension (MRC), and it is typically solved by fine-tuning the span-selection heads of Pre-trained Language Models (PLMs). However, most existing MRC approaches may perform poorly in the few-shot learning scenario. To address this issue, we propose a novel framework named Knowledge Enhanced Contrastive Prompt-tuning (KECP). Instead of adding pointer heads to PLMs, we introduce a seminal paradigm for EQA that transforms the task into a non-autoregressive Masked Language Modeling (MLM) generation problem. Simultaneously, rich semantics from an external knowledge base (KB) and the passage context are leveraged to enhance the query representations. In addition, to boost the performance of the PLM, we jointly train the model with MLM and contrastive learning objectives. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art approaches in few-shot settings by a large margin.
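To make the two objectives in the abstract concrete, below is a minimal sketch of (1) casting EQA as non-autoregressive MLM generation and (2) an InfoNCE-style contrastive loss over candidate answer spans. The prompt template, the span/query representations, and the temperature are illustrative assumptions, not the paper's exact design, and the knowledge-base enhancement module is omitted.

```python
# Sketch only: frames EQA as MLM generation and adds a contrastive span loss.
# Assumptions (not from the paper): the "answer:" prompt template, the span
# representation inputs, and the temperature/weighting hyperparameters.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

def mlm_answer_loss(question, passage, answer, max_answer_len=8):
    """Cast EQA as masked language modeling: the answer slot is a run of
    [MASK] tokens that the PLM fills in one parallel, non-autoregressive
    pass (no left-to-right decoding)."""
    answer_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]
    n = min(len(answer_ids), max_answer_len)
    prompt = (f"{question} {tokenizer.sep_token} {passage} answer: "
              + " ".join([tokenizer.mask_token] * n))
    enc = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    labels = torch.full_like(enc["input_ids"], -100)  # -100 = ignored positions
    mask_pos = enc["input_ids"][0] == tokenizer.mask_token_id
    labels[0, mask_pos] = torch.tensor(answer_ids[:n])
    out = model(**enc, labels=labels)
    return out.loss  # cross-entropy over the masked answer tokens only

def span_contrastive_loss(span_reps, query_rep, gold_idx, temperature=0.1):
    """InfoNCE over candidate spans: pull the query representation toward
    the gold answer span and push it away from distractor spans.
    span_reps: (num_spans, hidden); query_rep: (hidden,)."""
    sims = F.cosine_similarity(query_rep.unsqueeze(0), span_reps) / temperature
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([gold_idx]))

# Joint training objective, as the abstract describes:
#   L = L_MLM + lam * L_contrastive   (lam is an assumed weighting term)
```

In this reading, the MLM term teaches the PLM to generate the answer tokens in place of the masks, while the contrastive term sharpens the query representation against distractor spans; the two losses would be summed per batch and optimized jointly.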