Paper Title
Multi-Narrative Semantic Overlap Task: Evaluation and Benchmark
Paper Authors
Paper Abstract
In this paper, we introduce an important yet relatively unexplored NLP task called Multi-Narrative Semantic Overlap (MNSO), which entails generating a Semantic Overlap of multiple alternate narratives. As no benchmark dataset is readily available for this task, we created one by crawling 2,925 narrative pairs from the web and then went through the tedious process of manually creating 411 different ground-truth semantic overlaps by engaging human annotators. As a way to evaluate this novel task, we first conducted a systematic study by borrowing the popular ROUGE metric from the text-summarization literature and discovered that ROUGE is not suitable for our task. Subsequently, we conducted further human annotation/validation to create 200 document-level and 1,518 sentence-level ground-truth labels, which helped us formulate a new precision-recall style evaluation metric, called SEM-F1 (Semantic F1). Experimental results show that the proposed SEM-F1 metric yields higher correlation with human judgment as well as higher inter-rater agreement compared to the ROUGE metric.
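The sketch below illustrates one way a precision-recall style semantic F1 could be computed from sentence embeddings, as the abstract describes. It is not the paper's implementation: the encoder model, the cosine-similarity matching, and the function name sem_f1 are assumptions made for illustration only.

```python
# A minimal sketch of a SEM-F1-style metric, assuming pre-split sentences and a
# sentence-embedding encoder; details here are illustrative, not the authors' setup.
import numpy as np
from sentence_transformers import SentenceTransformer

_model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder, not specified in the abstract

def sem_f1(predicted_sentences, reference_sentences):
    """Precision-recall style semantic F1 between two lists of sentences."""
    pred = _model.encode(predicted_sentences)
    ref = _model.encode(reference_sentences)
    # Normalize embeddings so dot products equal cosine similarities.
    pred = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    sim = pred @ ref.T  # pairwise cosine similarities
    precision = sim.max(axis=1).mean()  # each predicted sentence vs. its best reference match
    recall = sim.max(axis=0).mean()     # each reference sentence vs. its best predicted match
    return 2 * precision * recall / (precision + recall + 1e-8)
```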