Paper Title

Exploring End-to-End Differentiable Natural Logic Modeling

Authors

Yufei Feng, Zi'ou Zheng, Quan Liu, Michael Greenspan, Xiaodan Zhu

Abstract

We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components. The proposed model adapts module networks to model natural logic operations and is enhanced with a memory component to model contextual information. Experiments show that the proposed framework can effectively model monotonicity-based reasoning, compared to baseline neural network models that lack built-in inductive bias for monotonicity-based reasoning. The proposed model is shown to be robust when transferred from upward to downward inference. We perform further analyses of the model's performance on aggregation, showing the effectiveness of the proposed subcomponents in helping achieve better intermediate aggregation performance.
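To make the monotonicity-based reasoning mentioned above concrete, the sketch below illustrates the symbolic core of natural logic that such models learn to emulate: projecting a lexical entailment relation through an upward- or downward-monotone context. This is a minimal, hedged illustration of MacCartney-style natural logic projection, not the paper's implementation; the function and relation names are invented for this example.

```python
# Two of the seven basic natural-logic relations between word pairs
# (the names here are illustrative, not the paper's notation).
FORWARD = "forward_entailment"   # e.g. "dog" is more specific than "animal"
REVERSE = "reverse_entailment"   # e.g. "animal" is more general than "dog"

def project(relation: str, monotone: str) -> str:
    """Project a lexical relation through a monotone context.

    In an upward-monotone context (e.g. under "some"), entailment
    relations are preserved; in a downward-monotone context (e.g. in
    the scope of "no"), forward and reverse entailment swap.
    """
    if monotone == "up":
        return relation
    if monotone == "down":
        if relation == FORWARD:
            return REVERSE
        if relation == REVERSE:
            return FORWARD
        return relation
    raise ValueError(f"unknown monotonicity: {monotone}")

# "Some dogs bark" entails "Some animals bark": the upward context
# preserves the forward entailment from "dog" to "animal".
assert project(FORWARD, "up") == FORWARD

# "No animals bark" entails "No dogs bark": the downward context flips
# the relation, so substituting the more specific word is now licensed.
assert project(FORWARD, "down") == REVERSE
```

The upward-to-downward transfer experiments in the abstract probe exactly this flip: a model with the right inductive bias should apply the same projection rule in both contexts rather than memorizing upward-monotone patterns.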
