Paper Title

Learning First-Order Symbolic Planning Representations That Are Grounded

Authors

Andrés Occhipinti Liberman, Blai Bonet, Hector Geffner

Abstract

Two main approaches have been developed for learning first-order planning (action) models from unstructured data: combinatorial approaches that yield crisp action schemas from the structure of the state space, and deep learning approaches that produce action schemas from states represented by images. A benefit of the former approach is that the learned action schemas are similar to those that can be written by hand; a benefit of the latter is that the learned representations (predicates) are grounded on the images, and as a result, new instances can be given in terms of images. In this work, we develop a new formulation for learning crisp first-order planning models that are grounded on parsed images, a step to combine the benefits of the two approaches. Parsed images are assumed to be given in a simple O2D language (objects in 2D) that involves a small number of unary and binary predicates like "left", "above", "shape", etc. After learning, new planning instances can be given in terms of pairs of parsed images, one for the initial situation and the other for the goal. Learning and planning experiments are reported for several domains including Blocks, Sokoban, IPC Grid, and Hanoi.
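To make the O2D representation concrete, here is a minimal sketch, assuming a hypothetical encoding in which a parsed image is a set of ground atoms over the unary and binary predicates the abstract names ("shape", "left", "above", etc.), and a planning instance is a pair of parsed images for the initial situation and the goal. The predicate, object, and function names below are illustrative, not the paper's actual data format.

```python
# Sketch of parsed images in an O2D-style language (assumed encoding):
# a scene is a set of ground atoms; each atom applies a predicate name
# to one or two object names. All names here are illustrative.

Atom = tuple  # e.g., ("shape", "b1", "square") or ("above", "b1", "b2")

def scene(atoms):
    """A parsed 2D scene: a set of ground O2D atoms."""
    return frozenset(atoms)

# Hypothetical parsed image of a two-block tower: b1 stacked on b2.
initial = scene([
    ("shape", "b1", "square"),
    ("shape", "b2", "square"),
    ("above", "b1", "b2"),
])

# Hypothetical parsed goal image: the tower reversed, b2 on b1.
goal = scene([
    ("shape", "b1", "square"),
    ("shape", "b2", "square"),
    ("above", "b2", "b1"),
])

# After learning, a new planning instance can be given purely as a pair
# of parsed images: one for the initial situation, one for the goal.
instance = (initial, goal)
```

Under this reading, learning produces crisp first-order action schemas whose predicates are defined over such O2D atoms, which is why new instances need only the two parsed images rather than a hand-written symbolic description.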
