Paper Title

Explanation from Specification

Paper Authors

Naik, Harish; Turán, György

Paper Abstract

Explainable components in XAI algorithms often come from a familiar set of models, such as linear models or decision trees. We formulate an approach where the type of explanation produced is guided by a specification. Specifications are elicited from the user, possibly through interaction with the user and with contributions from other areas. Areas where a specification could be obtained include forensic, medical, and scientific applications. Providing a menu of possible types of specification in an area is an exploratory knowledge representation and reasoning task for the algorithm designer, aimed at understanding the possibilities and limitations of efficiently computable modes of explanation. Two examples are discussed: explanations for Bayesian networks using the theory of argumentation, and explanations for graph neural networks. The latter case illustrates the possibility of making a representation formalism available to the user for specifying the type of explanation requested, for example, a chemical query language for classifying molecules. The approach is motivated by a theory of explanation in the philosophy of science, and it relates to current questions in the philosophy of science on the role of machine learning.
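
To make the idea of a query-language specification concrete, here is a minimal sketch in Python. It assumes SMARTS (a standard chemical query language) as the specification formalism and uses the RDKit library for substructure matching; the paper's own formalism and implementation are not given in the abstract, so the pattern, function name, and example molecules below are purely illustrative.

    from rdkit import Chem

    # Hypothetical explanation for a GNN classifier, expressed as a SMARTS
    # query: "the molecule contains a carboxylic acid group". SMARTS is one
    # concrete chemical query language; the paper does not commit to it,
    # so this pattern is an illustrative assumption.
    explanation_query = Chem.MolFromSmarts("C(=O)[OH]")

    def explanation_covers(smiles: str) -> bool:
        """Check whether the query-style explanation applies to a molecule."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            raise ValueError(f"unparseable SMILES: {smiles}")
        return mol.HasSubstructMatch(explanation_query)

    # Acetic acid contains a carboxylic acid group; benzene does not.
    print(explanation_covers("CC(=O)O"))   # expected: True
    print(explanation_covers("c1ccccc1"))  # expected: False

In the specification-guided setting described above, the user would fix the query language in advance, and the explanation algorithm would search for a query in that language consistent with the classifier's behavior.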
