Paper Title
Finding Small Proofs for Description Logic Entailments: Theory and Practice (Extended Technical Report)
Paper Authors
Paper Abstract
Logic-based approaches to AI have the advantage that their behaviour can in principle be explained by providing their users with proofs for the derived consequences. However, if such proofs get very large, then it may be hard to understand a consequence even if the individual derivation steps are easy to comprehend. This motivates our interest in finding small proofs for Description Logic (DL) entailments. Instead of concentrating on a specific DL and proof calculus for this DL, we introduce a general framework in which proofs are represented as labeled, directed hypergraphs, where each hyperedge corresponds to a single sound derivation step. On the theoretical side, we investigate the complexity of deciding whether a certain consequence has a proof of size at most $n$ along the following orthogonal dimensions: (i) the underlying proof system is polynomial or exponential; (ii) proofs may or may not reuse already derived consequences; and (iii) the number $n$ is represented in unary or binary. We have determined the exact worst-case complexity of this decision problem for all but one of the possible combinations of these options. On the practical side, we have developed and implemented an approach for generating proofs for expressive DLs based on a non-standard reasoning task called forgetting. We have evaluated this approach on a set of realistic ontologies and compared the obtained proofs with proofs generated by the DL reasoner ELK, finding that forgetting-based proofs are often better w.r.t. different measures of proof complexity.
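As an illustration only (not the authors' formalism or implementation), the following minimal Python sketch shows one possible way to represent a proof as a labeled, directed hypergraph in which each hyperedge connects a set of premise axioms to a single conclusion, as described in the abstract; all class and method names are illustrative assumptions, and proof size is measured naively as the number of vertices.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class HyperEdge:
        # One sound derivation step: from a set of premises, derive a conclusion.
        premises: frozenset   # labels of the premise vertices
        conclusion: str       # label of the conclusion vertex

    @dataclass
    class HypergraphProof:
        # Vertices are labeled with DL axioms (plain strings here, for illustration).
        vertices: set = field(default_factory=set)
        edges: list = field(default_factory=list)

        def add_step(self, premises, conclusion):
            """Record a single derivation step as a hyperedge."""
            self.vertices.update(premises)
            self.vertices.add(conclusion)
            self.edges.append(HyperEdge(frozenset(premises), conclusion))

        def size(self):
            """One possible proof-size measure: the number of vertices."""
            return len(self.vertices)

    # Tiny example: from A ⊑ B and B ⊑ C, derive A ⊑ C in one step.
    proof = HypergraphProof()
    proof.add_step({"A ⊑ B", "B ⊑ C"}, "A ⊑ C")
    print(proof.size())  # 3

Other size measures (e.g., number of hyperedges, or tree size when derived consequences may not be reused) could be substituted for the vertex count in this sketch.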