Paper Title

"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans

Paper Authors

Vivian Lai, Han Liu, Chenhao Tan

Abstract

To support human decision making with machine learning models, we often need to elucidate patterns embedded in the models that are unsalient, unknown, or counterintuitive to humans. While existing approaches focus on explaining machine predictions with real-time assistance, we explore model-driven tutorials to help humans understand these patterns in a training phase. We consider both tutorials with guidelines from scientific papers, analogous to current practices of science communication, and automatically selected examples from training data with explanations. We use deceptive review detection as a testbed and conduct large-scale, randomized human-subject experiments to examine the effectiveness of such tutorials. We find that tutorials indeed improve human performance, with and without real-time assistance. In particular, although deep learning provides predictive performance superior to that of simple models, tutorials and explanations from simple models are more useful to humans. Our work suggests future directions for human-centered tutorials and explanations towards a synergy between humans and AI.
