Paper Title

Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models

Paper Authors

Stevens, Alexander, De Smedt, Johannes

Paper Abstract

Although a recent shift has been made in the field of predictive process monitoring towards models from the explainable artificial intelligence field, evaluation still occurs mainly through performance-based metrics, thus not accounting for the actionability and implications of the explanations. In this paper, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model in the field of process outcome prediction. The introduced properties are analysed along the event, case, and control flow perspectives, which are typical for a process-based analysis. This allows comparing inherently created explanations with post-hoc explanations. We benchmark seven classifiers on thirteen real-life event logs; these classifiers cover a range of transparent and non-transparent machine learning and deep learning models, further complemented with explainability techniques. Next, this paper contributes a set of guidelines named X-MOP that allows selecting the appropriate model based on the event log specifications, by providing insight into how the varying preprocessing, model complexity and explainability techniques typical in process outcome prediction influence the explainability of the model.
