Paper Title
Unsupervised Discovery and Composition of Object Light Fields
Paper Authors
Abstract
Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding. Recent efforts have tackled unsupervised discovery of object-centric neural scene representations. However, the high cost of ray-marching, exacerbated by the fact that each object representation has to be ray-marched separately, leads to insufficiently sampled radiance fields and thus noisy renderings, poor framerates, and high memory and time complexity during training and rendering. Here, we propose to represent objects in an object-centric, compositional scene representation as light fields. We propose a novel light field compositor module that enables reconstructing the global light field from a set of object-centric light fields. Dubbed Compositional Object Light Fields (COLF), our method enables unsupervised learning of object-centric neural scene representations, state-of-the-art reconstruction and novel view synthesis performance on standard datasets, and rendering and training speeds orders of magnitude faster than existing 3D approaches.
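The core idea in the abstract is that each object light field maps a ray directly to a color (no ray-marching), and a compositor merges the per-object outputs into one global light field. The sketch below illustrates one plausible realization: each object also emits a per-ray confidence logit, and the compositor takes a softmax-weighted sum over objects for every ray. All function names, shapes, and the softmax weighting are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=0):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def compose_light_fields(colors, logits):
    """Compose per-object light field outputs into a global light field.

    colors: (K, R, 3) RGB predicted by each of K object light fields for R rays.
    logits: (K, R) per-object, per-ray confidence (hypothetical; the real
            compositor may use a different mechanism).
    Returns: (R, 3) composed colors, one per ray.
    """
    w = softmax(logits, axis=0)            # (K, R), weights sum to 1 per ray
    return (w[..., None] * colors).sum(0)  # weighted sum over the K objects

# Toy example: 2 objects, 4 rays.
rng = np.random.default_rng(0)
colors = rng.random((2, 4, 3))
logits = rng.standard_normal((2, 4))
img = compose_light_fields(colors, logits)
print(img.shape)  # (4, 3)
```

Because each object network is evaluated once per ray rather than at many samples along the ray, the per-ray cost is O(K) instead of O(K × samples), which is consistent with the speedups the abstract claims for light fields over radiance fields.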