Paper title
Explainable AI through the Learning of Arguments
Paper authors
Paper abstract
Learning arguments is highly relevant to the field of explainable artificial intelligence. It is a family of symbolic machine learning techniques that are particularly human-interpretable. These techniques learn a set of arguments as an intermediate representation. Arguments are small rules with exceptions that can be chained into larger arguments for making predictions or decisions. We investigate the learning of arguments, specifically from a 'case model' as proposed by Verheij [34]. A case model in Verheij's approach consists of cases or scenarios in a legal setting, and the number of cases in a case model is relatively low. Here, we investigate whether Verheij's approach can be used to learn arguments from other types of data sets with a much larger number of instances. We compare the learning of arguments from a case model with the HeRO algorithm [15] and with learning a decision tree.