Paper Title
NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction
Paper Authors
Paper Abstract
While NeRF has shown great success for neural reconstruction and rendering, its limited MLP capacity and long per-scene optimization times make it challenging to model large-scale indoor scenes. In contrast, classical 3D reconstruction methods can handle large-scale scenes but do not produce realistic renderings. We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering. We process the input image sequence to predict per-frame local radiance fields via direct network inference. These are then fused using a novel recurrent neural network that incrementally reconstructs a global, sparse scene representation in real time at 22 fps. This global volume can be further fine-tuned to boost rendering quality. We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
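The abstract outlines a two-stage pipeline: a network predicts a local radiance-field feature volume per frame, and a recurrent unit fuses each local volume into a persistent global volume. Below is a minimal, illustrative sketch of such a recurrent volumetric fusion step, assuming a convolutional GRU-style update over dense feature grids; all names here (`RecurrentVolumeFusion`, `feat_dim`, etc.) are hypothetical, and the paper's actual architecture operates on a sparse volume and may differ in detail.

```python
# Conceptual sketch only (not the authors' code): fuse per-frame local
# feature volumes into a global volume with a convolutional GRU-style
# update. Dense tensors stand in for the paper's sparse representation.
import torch
import torch.nn as nn


class RecurrentVolumeFusion(nn.Module):
    """Hypothetical GRU-style fusion of a local volume into the global one."""

    def __init__(self, feat_dim: int = 16):
        super().__init__()
        # Gates are computed from concatenated [global, local] features.
        self.update_gate = nn.Conv3d(2 * feat_dim, feat_dim, 3, padding=1)
        self.reset_gate = nn.Conv3d(2 * feat_dim, feat_dim, 3, padding=1)
        self.candidate = nn.Conv3d(2 * feat_dim, feat_dim, 3, padding=1)

    def forward(self, global_feat: torch.Tensor, local_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([global_feat, local_feat], dim=1)
        z = torch.sigmoid(self.update_gate(x))   # how much to overwrite
        r = torch.sigmoid(self.reset_gate(x))    # how much history to keep
        h = torch.tanh(self.candidate(torch.cat([r * global_feat, local_feat], dim=1)))
        return (1 - z) * global_feat + z * h     # fused global features


# Usage: stream per-frame local volumes (assumed already resampled into
# the global grid) and fuse them incrementally.
fusion = RecurrentVolumeFusion(feat_dim=16)
global_vol = torch.zeros(1, 16, 32, 32, 32)      # empty global volume
for _ in range(5):                               # five incoming frames
    local_vol = torch.randn(1, 16, 32, 32, 32)   # placeholder per-frame prediction
    global_vol = fusion(global_vol, local_vol)
print(global_vol.shape)  # torch.Size([1, 16, 32, 32, 32])
```

The gated update lets the global volume selectively retain past observations while integrating each new frame, which is one plausible way to realize the incremental, recurrent fusion the abstract describes.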