Paper Title

Data-Driven Occupancy Grid Mapping using Synthetic and Real-World Data

Paper Authors

Raphael van Kempen, Bastian Lampe, Lennart Reiher, Timo Woopen, Till Beemelmanns, Lutz Eckstein

Paper Abstract

In perception tasks of automated vehicles (AVs), data-driven methods have often outperformed conventional approaches. This motivated us to develop a data-driven methodology to compute occupancy grid maps (OGMs) from lidar measurements. Our approach extends previous work such that the estimated environment representation now contains an additional layer for cells occupied by dynamic objects. Earlier solutions could only distinguish between free and occupied cells. The information on whether an obstacle can move plays an important role in planning the behavior of an AV. We present two approaches to generating training data. One approach extends our previous work on using synthetic training data so that OGMs with the three aforementioned cell states are generated. The other approach uses manual annotations from the nuScenes dataset to create training data. We compare the performance of both models in a quantitative analysis on unseen data from the real-world dataset. Next, we analyze the ability of both approaches to cope with a domain shift, i.e., when presented with lidar measurements from a different sensor on a different vehicle. We propose using information gained from evaluation on real-world data to further close the reality gap and create better synthetic data that can be used to train occupancy grid mapping models for arbitrary sensor configurations. Code is available at https://github.com/ika-rwth-aachen/DEviLOG.
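
To make the three-state representation concrete, below is a minimal, hypothetical Python sketch (not the authors' code from the DEviLOG repository): it treats a per-cell network output as logits over the three states (free, statically occupied, dynamically occupied) and derives the most likely state per grid cell. The grid size and the random logits are illustrative stand-ins for a real model's output.

```python
import numpy as np

# Hypothetical illustration of a three-state occupancy grid map:
# an H x W x 3 array of per-cell scores for
# (free, statically occupied, dynamically occupied).
H, W = 304, 304                     # grid size in cells (example values)
logits = np.random.randn(H, W, 3)   # stand-in for a network's raw output

# Convert per-cell logits to a categorical distribution over the states.
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)

# Most likely state per cell: 0 = free, 1 = static, 2 = dynamic.
state = probs.argmax(axis=-1)
print(state.shape, np.bincount(state.ravel(), minlength=3))
```

The extra "dynamic" channel is what distinguishes this representation from earlier two-state (free/occupied) grids mentioned in the abstract.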
