Paper Title
Explainability via Responsibility
Paper Authors
Paper Abstract
Procedural Content Generation via Machine Learning (PCGML) refers to a group of methods for creating game content (e.g., platformer levels or game maps) using machine learning models. PCGML approaches rely on black-box models, which can be difficult for human designers without expert knowledge of machine learning to understand and debug. This can be even trickier in co-creative systems, where human designers must interact with AI agents to generate game content. In this paper, we present an approach to explainable artificial intelligence in which certain training instances are offered to human users as an explanation for the AI agent's actions during a co-creation process. We evaluate this approach by approximating its ability to provide human users with explanations of the AI agent's actions and to help them cooperate more efficiently with the AI agent.
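The abstract's core idea, offering training instances as explanations for an agent's actions, can be illustrated with a minimal sketch. Assuming explanations are produced by retrieving the training instance closest to the agent's current input in some feature space (the function names and the stand-in feature map below are hypothetical illustrations, not the paper's actual method):

```python
import numpy as np

def embed(x):
    # Stand-in feature map; a real system would use the agent's own
    # learned representation (e.g., a hidden layer of its network).
    return np.asarray(x, dtype=float)

def explain_action(query, train_inputs, train_actions):
    """Return the training instance most similar to `query` as a
    candidate explanation for the agent's current action."""
    q = embed(query)
    dists = [np.linalg.norm(embed(x) - q) for x in train_inputs]
    idx = int(np.argmin(dists))
    return train_inputs[idx], train_actions[idx], dists[idx]

# Toy usage: level segments encoded as small feature vectors.
train_inputs = [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]]
train_actions = ["place platform", "place enemy", "place gap"]
instance, action, dist = explain_action([0.8, 0.2], train_inputs, train_actions)
print(f"Most responsible training instance: {instance} -> {action} (distance {dist:.2f})")
```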