Paper Title

Bending Reality: Distortion-aware Transformers for Adapting to Panoramic Semantic Segmentation

Paper Authors

Jiaming Zhang, Kailun Yang, Chaoxiang Ma, Simon Reiß, Kunyu Peng, Rainer Stiefelhagen

Paper Abstract

Panoramic images with their 360-degree directional view encompass exhaustive information about the surrounding space, providing a rich foundation for scene understanding. To unfold this potential in the form of robust panoramic segmentation models, large quantities of expensive, pixel-wise annotations are crucial for success. Such annotations are available, but predominantly for narrow-angle, pinhole-camera images which, off the shelf, serve as sub-optimal resources for training panoramic models. Distortions and the distinct image-feature distribution in 360-degree panoramas impede the transfer from the annotation-rich pinhole domain and therefore come with a big dent in performance. To get around this domain difference and bring together semantic annotations from pinhole- and 360-degree surround-visuals, we propose to learn object deformations and panoramic image distortions in the Deformable Patch Embedding (DPE) and Deformable MLP (DMLP) components which blend into our Transformer for PAnoramic Semantic Segmentation (Trans4PASS) model. Finally, we tie together shared semantics in pinhole- and panoramic feature embeddings by generating multi-scale prototype features and aligning them in our Mutual Prototypical Adaptation (MPA) for unsupervised domain adaptation. On the indoor Stanford2D3D dataset, our Trans4PASS with MPA maintains comparable performance to fully-supervised state-of-the-arts, cutting the need for over 1,400 labeled panoramas. On the outdoor DensePASS dataset, we break state-of-the-art by 14.39% mIoU and set the new bar at 56.38%. Code will be made publicly available at https://github.com/jamycheung/Trans4PASS.
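The abstract describes handling panoramic distortion with deformable components (DPE and DMLP). Below is a minimal, hypothetical PyTorch sketch of the Deformable Patch Embedding idea only: a small head predicts a per-pixel offset field, the input is re-sampled at the deformed locations, and a standard strided-conv patch projection follows. This is not the authors' implementation (see the GitHub repository linked above); the module and names such as `offset_head` are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformablePatchEmbedding(nn.Module):
    """Patch embedding whose sampling locations are shifted by a learned offset field (sketch)."""

    def __init__(self, in_chans=3, embed_dim=64, patch_size=4):
        super().__init__()
        # Hypothetical offset head: predicts a 2-D pixel offset for every spatial location.
        self.offset_head = nn.Conv2d(in_chans, 2, kernel_size=3, padding=1)
        nn.init.zeros_(self.offset_head.weight)
        nn.init.zeros_(self.offset_head.bias)  # start as a plain (undeformed) embedding
        # Standard strided-conv patch projection applied to the warped image.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        B, C, H, W = x.shape
        offsets = self.offset_head(x)  # (B, 2, H, W); channel 0 = dx, channel 1 = dy

        # Identity sampling grid in grid_sample's normalized [-1, 1] coordinates.
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, H, device=x.device),
            torch.linspace(-1.0, 1.0, W, device=x.device),
            indexing="ij",
        )
        base_grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)

        # Convert pixel offsets to normalized coordinates and deform the grid.
        norm = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1)], device=x.device)
        grid = base_grid + offsets.permute(0, 2, 3, 1) * norm

        # Re-sample the input at the deformed locations, then embed patches as usual.
        warped = F.grid_sample(x, grid, align_corners=True)
        return self.proj(warped)  # (B, embed_dim, H // patch_size, W // patch_size)


if __name__ == "__main__":
    dpe = DeformablePatchEmbedding()
    tokens = dpe(torch.randn(2, 3, 128, 256))  # e.g. a small equirectangular crop
    print(tokens.shape)  # torch.Size([2, 64, 32, 64])
```

Initializing the offset head to zero makes the module behave like an ordinary patch embedding at the start of training, so any deformation it learns (e.g. to compensate equirectangular distortion) is driven purely by the segmentation objective.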
