Paper Title

A general-purpose method for applying Explainable AI for Anomaly Detection

Paper Authors

Sipple, John; Youssef, Abdou

Paper Abstract

The need for explainable AI (XAI) is well established but relatively little has been published outside of the supervised learning paradigm. This paper focuses on a principled approach to applying explainability and interpretability to the task of unsupervised anomaly detection. We argue that explainability is principally an algorithmic task and interpretability is principally a cognitive task, and draw on insights from the cognitive sciences to propose a general-purpose method for practical diagnosis using explained anomalies. We define Attribution Error, and demonstrate, using real-world labeled datasets, that our method based on Integrated Gradients (IG) yields significantly lower attribution errors than alternative methods.
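
To make the core idea concrete, the following is a minimal sketch of Integrated Gradients (IG) attribution applied to an anomaly score, not the paper's actual implementation: it assumes a differentiable anomaly scorer (`score_fn`) and a chosen baseline point (e.g. a known-normal observation), both of which are illustrative assumptions here. Each feature's attribution is its deviation from the baseline weighted by the average gradient of the score along the straight-line path between the two points.

```python
# Minimal sketch of Integrated Gradients for an anomaly score.
# `score_fn`, `x`, and `baseline` are illustrative placeholders, not the
# paper's code; the baseline stands in for a "normal" reference point.
import jax
import jax.numpy as jnp

def integrated_gradients(score_fn, x, baseline, steps=64):
    """Approximate IG attributions of score_fn at x relative to baseline."""
    # Interpolation coefficients for a Riemann-sum approximation of the
    # path integral from the baseline to the observed point x.
    alphas = jnp.linspace(1.0 / steps, 1.0, steps)
    grad_fn = jax.grad(score_fn)

    def grad_at(alpha):
        return grad_fn(baseline + alpha * (x - baseline))

    avg_grad = jnp.mean(jax.vmap(grad_at)(alphas), axis=0)
    # Per-feature attribution: (x_i - baseline_i) * average gradient.
    return (x - baseline) * avg_grad

# Toy example: a quadratic anomaly score (squared distance from the origin).
score_fn = lambda v: jnp.sum(v ** 2)
x = jnp.array([0.5, 2.0, -0.1])       # anomalous observation
baseline = jnp.zeros_like(x)          # "normal" reference point
print(integrated_gradients(score_fn, x, baseline))
```

In this sketch the attributions sum (approximately) to the difference in anomaly score between the observation and the baseline, which is what lets the per-feature values be read as explanations of why the point was flagged.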
