Paper Title

Neural Volume Super-Resolution

Authors

Yuval Bahat, Yuxuan Zhang, Hendrik Sommerhoff, Andreas Kolb, Felix Heide

Abstract

Neural volumetric representations have become a widely adopted model for radiance fields in 3D scenes. These representations are fully implicit or hybrid function approximators of the instantaneous volumetric radiance in a scene, which are typically learned from multi-view captures of the scene. We investigate the new task of neural volume super-resolution - rendering high-resolution views corresponding to a scene captured at low resolution. To this end, we propose a neural super-resolution network that operates directly on the volumetric representation of the scene. This approach allows us to exploit an advantage of operating in the volumetric domain, namely the ability to guarantee consistent super-resolution across different viewing directions. To realize our method, we devise a novel 3D representation that hinges on multiple 2D feature planes. This allows us to super-resolve the 3D scene representation by applying 2D convolutional networks on the 2D feature planes. We validate the proposed method by super-resolving multi-view consistent views on a diverse set of unseen 3D scenes, confirming qualitatively and quantitatively favorable quality over existing approaches.
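
The abstract outlines the core architectural idea: factor the 3D scene into multiple 2D feature planes so that super-resolving the scene reduces to running a 2D convolutional network over those planes. The following is a minimal PyTorch sketch of that idea, not the authors' implementation; the class names TriPlaneFeatures and PlaneSuperResolver, the channel counts, the summation used to combine per-plane features, and the network layout are all illustrative assumptions.

# Minimal sketch (assumed, not the authors' code): a scene represented by three
# axis-aligned 2D feature planes, and a 2D CNN that super-resolves those planes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriPlaneFeatures(nn.Module):
    """Hybrid 3D representation: learnable XY, XZ, and YZ feature planes."""
    def __init__(self, channels=16, resolution=64):
        super().__init__()
        # One learnable feature plane per axis-aligned projection.
        self.planes = nn.Parameter(0.1 * torch.randn(3, channels, resolution, resolution))

    def sample(self, xyz):
        """Query features at 3D points xyz in [-1, 1]^3, shape (N, 3) -> (N, channels)."""
        projections = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # XY, XZ, YZ
        feats = []
        for plane, uv in zip(self.planes, projections):
            grid = uv.reshape(1, -1, 1, 2)                              # (1, N, 1, 2)
            f = F.grid_sample(plane[None], grid, align_corners=True)    # (1, C, N, 1)
            feats.append(f[0, :, :, 0].t())                             # (N, C)
        return sum(feats)  # combining planes by summation is an assumption

class PlaneSuperResolver(nn.Module):
    """2D convolutional network applied to the feature planes (assumed layout)."""
    def __init__(self, channels=16, scale=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode='bilinear', align_corners=True),
            nn.Conv2d(64, channels, kernel_size=3, padding=1),
        )

    def forward(self, planes):
        # planes: (3, C, H, W) -> (3, C, scale*H, scale*W)
        return self.net(planes)

# Usage: super-resolve the feature planes with the 2D CNN; the low-resolution
# representation can still be queried at arbitrary 3D points.
rep = TriPlaneFeatures(channels=16, resolution=64)
sr = PlaneSuperResolver(channels=16, scale=2)
hi_res_planes = sr(rep.planes)                        # (3, 16, 128, 128)
features = rep.sample(torch.rand(1024, 3) * 2 - 1)    # (1024, 16)

Operating on the planes rather than on rendered images is what allows the super-resolved scene to stay consistent across viewing directions, since every rendered view is produced from the same upsampled representation.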
