Paper Title
A-NeSI: A Scalable Approximate Method for Probabilistic Neurosymbolic Inference
Paper Authors
Paper Abstract
We study the problem of combining neural networks with symbolic reasoning. Recently introduced frameworks for Probabilistic Neurosymbolic Learning (PNL), such as DeepProbLog, perform exponential-time exact inference, limiting the scalability of PNL solutions. We introduce Approximate Neurosymbolic Inference (A-NeSI): a new framework for PNL that uses neural networks for scalable approximate inference. A-NeSI 1) performs approximate inference in polynomial time without changing the semantics of probabilistic logics; 2) is trained using data generated by the background knowledge; 3) can generate symbolic explanations of predictions; and 4) can guarantee the satisfaction of logical constraints at test time, which is vital in safety-critical applications. Our experiments show that A-NeSI is the first end-to-end method to solve three neurosymbolic tasks with exponential combinatorial scaling. Finally, our experiments show that A-NeSI achieves explainability and safety without a penalty in performance.
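To illustrate point 2 of the abstract, the sketch below shows how background knowledge alone can generate training pairs for an approximate inference network. It is a minimal illustration only, assuming an MNIST-addition-style task and a simple Dirichlet prior over beliefs; the function names (sample_belief, symbolic_program, generate_training_pair) are hypothetical and do not reflect A-NeSI's actual implementation or API.

```python
# Sketch: "trained using data generated by the background knowledge".
# Hypothetical setup: a two-digit addition task, where the symbolic program
# sums the digits recognized in two images.
import numpy as np

rng = np.random.default_rng(0)

def sample_belief():
    """Sample a belief: one distribution over 10 digit classes per image (shape (2, 10))."""
    return rng.dirichlet(np.ones(10), size=2)

def symbolic_program(world):
    """Background knowledge: map a concrete world (two digits) to the task output (their sum)."""
    return int(world[0] + world[1])

def generate_training_pair():
    """Sample a world from the belief and run the symbolic program to label it."""
    belief = sample_belief()
    world = [rng.choice(10, p=b) for b in belief]
    return belief, symbolic_program(world)

# An approximate inference network q(y | belief) could then be fit on such pairs,
# so that at test time it predicts output probabilities without exact enumeration.
for belief, y in (generate_training_pair() for _ in range(5)):
    print(y)
```

In this sketch, no labelled data is needed to train the inference network: the symbolic program provides the supervision, which is the mechanism the abstract refers to.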