Paper Title
Machine Guides, Human Supervises: Interactive Learning with Global Explanations
Paper Authors
Paper Abstract
We introduce explanatory guided learning (XGL), a novel interactive learning strategy in which a machine guides a human supervisor toward selecting informative examples for a classifier. The guidance is provided by means of global explanations, which summarize the classifier's behavior on different regions of the instance space and expose its flaws. Compared to other explanatory interactive learning strategies, which are machine-initiated and rely on local explanations, XGL is designed to be robust against cases in which the explanations supplied by the machine oversell the classifier's quality. Moreover, XGL leverages global explanations to open up the black-box of human-initiated interaction, enabling supervisors to select informative examples that challenge the learned model. By drawing a link to interactive machine teaching, we show theoretically that global explanations are a viable approach for guiding supervisors. Our simulations show that explanatory guided learning avoids overselling the model's quality and performs comparably or better than machine- and human-initiated interactive learning strategies in terms of model quality.
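The abstract describes an interaction loop in which the machine summarizes the current classifier through a global explanation and the human supervisor uses that summary to pick examples that expose the model's flaws. Below is a minimal, hypothetical sketch of such a loop; the choice of a shallow surrogate decision tree as the global explanation, the scikit-learn models, and the error-seeking simulated supervisor are all illustrative assumptions, not the authors' exact method.

```python
# Hypothetical XGL-style loop: the machine summarizes its classifier with a
# global explanation (here, a surrogate tree), and a (simulated) supervisor
# picks a labeled example from the region where the model looks most flawed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Unlabeled pool plus a small initial labeled set.
X, y = make_classification(n_samples=600, n_features=5, random_state=0)
labeled = list(rng.choice(len(X), size=20, replace=False))

model = LogisticRegression(max_iter=1000)

for _ in range(10):
    model.fit(X[labeled], y[labeled])

    # Global explanation: a shallow surrogate tree that summarizes the
    # classifier's behavior over the instance space (one region per leaf).
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, model.predict(X))
    regions = surrogate.apply(X)  # leaf id = region id for every instance

    # Simulated supervisor: inspect the summary and find the region where the
    # model's predictions disagree most with the true labels (a stand-in for
    # a human spotting a flaw in the explanation).
    worst_region, worst_err = None, -1.0
    for r in np.unique(regions):
        idx = np.where(regions == r)[0]
        err = np.mean(model.predict(X[idx]) != y[idx])
        if err > worst_err:
            worst_region, worst_err = r, err

    # Label one new example from the flawed region and continue.
    candidates = [i for i in np.where(regions == worst_region)[0]
                  if i not in labeled]
    if candidates:
        labeled.append(int(rng.choice(candidates)))

model.fit(X[labeled], y[labeled])
print("accuracy after XGL-style guidance:", round(model.score(X, y), 3))
```

Because the supervisor targets regions where the summary reveals poor behavior, the queried examples are exactly those that challenge the learned model, which is the human-initiated selection the abstract contrasts with machine-initiated, locally explained strategies.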