Paper Title

Language Models as Fact Checkers?

Authors

Nayeon Lee, Belinda Z. Li, Sinong Wang, Wen-tau Yih, Hao Ma, Madian Khabsa

Abstract


Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data. In this paper, we leverage this implicit knowledge to create an effective end-to-end fact checker using solely a language model, without any external knowledge or explicit retrieval components. While previous work on extracting knowledge from LMs has focused on the task of open-domain question answering, to the best of our knowledge, this is the first work to examine the use of language models as fact checkers. In a closed-book setting, we show that our zero-shot LM approach outperforms a random baseline on the standard FEVER task, and that our fine-tuned LM compares favorably with standard baselines. Though we do not ultimately outperform methods which use explicit knowledge bases, we believe our exploration shows that this method is viable and has much room for exploration.
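To make the closed-book, zero-shot setting concrete, the sketch below shows one common way such an approach can be framed: verbalize each FEVER label as a natural-language continuation of the claim and pick the label the LM scores highest. This is an illustrative sketch only, not the paper's released implementation; the prompt template, the `FEVER_VERBALIZERS` mapping, and the `score_fn` interface are assumptions. `score_fn` stands in for a pretrained LM's likelihood of a string (e.g., summed token log-probabilities).

```python
# Hypothetical sketch of zero-shot fact verification with a language model.
# The verbalizers and prompt template are illustrative assumptions, not
# taken from the paper.

FEVER_VERBALIZERS = {
    "SUPPORTS": "true",
    "REFUTES": "false",
    "NOT ENOUGH INFO": "unverifiable",
}


def classify_claim(claim: str, score_fn) -> str:
    """Pick the FEVER label whose verbalized continuation the LM scores
    highest, relying only on knowledge implicit in the model's parameters
    (no retrieval, no external knowledge base)."""
    best_label, best_score = None, float("-inf")
    for label, word in FEVER_VERBALIZERS.items():
        prompt = f'Claim: "{claim}" This claim is {word}.'
        score = score_fn(prompt)  # stand-in for LM log-likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

In practice `score_fn` would wrap a real pretrained model; the fine-tuned variant described in the abstract would instead train a classification head on claim inputs rather than comparing prompt scores.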
