Paper Title

Autoencoder for Synthetic to Real Generalization: From Simple to More Complex Scenes

Authors

Steve Dias Da Cruz, Bertram Taetz, Thomas Stifter, Didier Stricker

Abstract

Learning on synthetic data and transferring the resulting properties to their real counterparts is an important challenge for reducing costs and increasing safety in machine learning. In this work, we focus on autoencoder architectures and aim at learning latent space representations that are invariant to the inductive biases caused by the domain shift between simulated and real images showing the same scenario. We train on synthetic images only and present approaches to increase generalizability and improve the preservation of semantics on real datasets of increasing visual complexity. We show that pre-trained feature extractors (e.g. VGG) can be sufficient for generalization on images of lower complexity, but that visually more complex scenes require additional improvements. To this end, we demonstrate that a new sampling technique, which matches semantically important parts of the image while randomizing the other parts, leads to salient feature extraction and the neglect of unimportant parts. This helps generalization to real data, and we further show that our approach outperforms fine-tuned classification models.
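To make the role of the pre-trained feature extractor concrete, below is a minimal PyTorch sketch of a frozen VGG16 perceptual loss that could accompany an autoencoder trained on synthetic images. The cut-off layer, loss weighting, and training setup are illustrative assumptions, not the authors' exact configuration.

```python
import torch.nn as nn
import torchvision.models as models

class VGGPerceptualLoss(nn.Module):
    """Frozen VGG16 feature-space loss (illustrative sketch, not the paper's exact setup)."""

    def __init__(self, layer_index=16):
        # layer_index=16 truncates VGG16 after relu3_3; the choice of
        # cut-off layer is an assumption, not taken from the paper.
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False  # keep the pre-trained extractor fixed

    def forward(self, reconstruction, target):
        # Compare reconstruction and target in the frozen feature space
        # rather than pixel space, encouraging semantically salient outputs.
        return nn.functional.mse_loss(self.features(reconstruction),
                                      self.features(target))

# Hypothetical usage inside an autoencoder training step:
#   perceptual = VGGPerceptualLoss()
#   x_hat = autoencoder(x_synthetic)
#   loss = nn.functional.mse_loss(x_hat, x_synthetic) \
#          + 0.1 * perceptual(x_hat, x_synthetic)
```

Freezing the extractor means the loss measures distances in a representation that never adapts to the synthetic domain, which is one plausible reason such features transfer to lower-complexity real images without fine-tuning.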
