Paper Title
Teaching Temporal Logics to Neural Networks
Paper Authors
Paper Abstract
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics. In this work we focus on linear-time temporal logic (LTL), as it is widely used in verification. We train a Transformer on the problem to directly predict a solution, i.e. a trace, to a given LTL formula. The training data is generated with classical solvers, which, however, only provide one of many possible solutions to each formula. We demonstrate that it is sufficient to train on those particular solutions to formulas, and that Transformers can predict solutions even to formulas from benchmarks from the literature on which the classical solver timed out. Transformers also generalize to the semantics of the logics: while they often deviate from the solutions found by the classical solvers, they still predict correct solutions to most formulas.
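For intuition about what a "solution" means here, the following is a minimal illustration (not taken from the paper) of the standard LTL satisfaction relation over an infinite trace \pi = \pi[0]\,\pi[1]\,\ldots, together with a small example formula and a satisfying ultimately periodic trace of the kind a classical solver or the Transformer would output:

\pi, i \models a \iff a \in \pi[i]
\pi, i \models \mathbf{X}\,\varphi \iff \pi, i+1 \models \varphi
\pi, i \models \varphi\,\mathbf{U}\,\psi \iff \exists k \ge i:\ \pi, k \models \psi \ \text{and}\ \forall j,\ i \le j < k:\ \pi, j \models \varphi
\mathbf{F}\,\varphi \equiv \mathit{true}\,\mathbf{U}\,\varphi \qquad \mathbf{G}\,\varphi \equiv \neg\mathbf{F}\,\neg\varphi

For example, the formula \mathbf{F}\,a \wedge \mathbf{G}(a \rightarrow \mathbf{X}\,b) is satisfied by the ultimately periodic trace \pi = \{\}\,\{a\}\,(\{b\})^{\omega}: a eventually holds (at position 1), and every position where a holds is immediately followed by one where b holds. In this sense a predicted trace counts as correct whenever it satisfies the formula, even if it differs from the particular trace produced by the classical solver.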