Paper Title


Temporal Embeddings and Transformer Models for Narrative Text Understanding

Paper Authors

Vani K., Simone Mellace, Alessandro Antonucci

Abstract


We present two deep learning approaches to narrative text understanding for character relationship modelling. The temporal evolution of these relations is described by dynamic word embeddings, which are designed to learn semantic changes over time. An empirical analysis of the corresponding character trajectories shows that such approaches are effective in depicting dynamic evolution. A supervised learning approach based on the state-of-the-art transformer model BERT is used instead to detect static relations between characters. The empirical validation shows that such events (e.g., two characters belonging to the same family) might be spotted with good accuracy, even when using automatically annotated data. This provides a deeper understanding of narrative plots based on the identification of key facts. Standard clustering techniques are finally used for character de-aliasing, a necessary pre-processing step for both approaches. Overall, deep learning models appear to be suitable for narrative text understanding, while also providing a challenging and unexploited benchmark for general natural language understanding.
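The "character trajectories" mentioned in the abstract can be pictured as the cosine similarity between a character's time-sliced embedding and the embedding of a relation word, plotted over consecutive narrative segments. The sketch below uses toy hand-picked vectors (all numbers are hypothetical; in the paper these would come from a dynamic word-embedding model), purely to illustrate the trajectory idea:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

# Toy time-sliced embeddings for one character (hypothetical values;
# a dynamic embedding model would produce one vector per narrative slice).
character_by_slice = {
    1: [0.9, 0.1, 0.0],
    2: [0.6, 0.5, 0.2],
    3: [0.2, 0.9, 0.4],
}
relation_word = [0.1, 0.9, 0.3]  # e.g. the embedding of "friend"

def trajectory(char_vectors, rel_vector):
    """Similarity of the character to the relation word, per time slice."""
    return [cosine(vec, rel_vector) for _, vec in sorted(char_vectors.items())]

traj = trajectory(character_by_slice, relation_word)
# A monotonically rising trajectory suggests the character drifts
# semantically closer to the relation word as the narrative progresses.
```

With these toy vectors the trajectory rises across the three slices, which is the kind of pattern the empirical analysis in the paper inspects to depict a relation's dynamic evolution.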
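The abstract also names character de-aliasing (grouping mentions such as a first name, full name, and honorific under one character) as a clustering pre-processing step. As a self-contained illustration of that idea, and not the authors' actual pipeline, here is a greedy single-link clustering of surface mentions using stdlib string similarity; the mention list and the 0.5 threshold are assumptions for the example:

```python
from difflib import SequenceMatcher

def similar(a, b, threshold=0.5):
    """True if two mentions look alike (SequenceMatcher ratio on lowercase)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def dealias(mentions, threshold=0.5):
    """Greedy single-link clustering of character mentions.

    Each mention joins the first existing cluster containing a
    sufficiently similar mention; otherwise it starts a new cluster.
    """
    clusters = []
    for m in mentions:
        for cluster in clusters:
            if any(similar(m, other, threshold) for other in cluster):
                cluster.append(m)
                break
        else:
            clusters.append([m])
    return clusters

# Hypothetical mention list for demonstration.
mentions = ["Harry", "Harry Potter", "Mr. Potter", "Hermione", "Hermione Granger"]
print(dealias(mentions))
```

Running this groups the three Potter mentions together and the two Hermione mentions together. A real de-aliasing step would cluster richer features (e.g. embeddings or co-reference signals) rather than raw string overlap, but the grouping logic is the same.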
