Paper Title
AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields
Paper Authors
Paper Abstract
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations. However, rendering images with this new paradigm is slow because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray. Previous work has mainly focused on speeding up the network evaluations that are associated with each sample point, e.g., via caching of radiance values into explicit spatial data structures, but this comes at the expense of model compactness. In this paper, we propose a novel dual-network architecture that takes an orthogonal direction by learning how to best reduce the number of required sample points. To this end, we split our network into a sampling and shading network that are jointly trained. Our training scheme employs fixed sample positions along each ray, and incrementally introduces sparsity throughout training to achieve high quality even at low sample counts. After fine-tuning with the target number of samples, the resulting compact neural representation can be rendered in real-time. Our experiments demonstrate that our approach outperforms concurrent compact neural representations in terms of quality and frame rate and performs on par with highly efficient hybrid representations. Code and supplementary material are available at https://thomasneff.github.io/adanerf.
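The following is a minimal, hypothetical sketch of the dual-network idea described in the abstract: a small sampling MLP scores fixed sample positions along each ray, and only the high-scoring positions contribute to the shading MLP's volume-rendering composite. All class names, layer sizes, the importance threshold, and the compositing details are illustrative assumptions, not the paper's exact architecture or training scheme.

```python
# Hypothetical sketch of a dual-network (sampling + shading) renderer.
# Layer sizes, the threshold, and compositing details are assumptions.
import torch
import torch.nn as nn

class SamplingNetwork(nn.Module):
    """Predicts an importance score for each of N fixed samples along a ray."""
    def __init__(self, ray_dim=6, num_samples=128, hidden=256):
        super().__init__()
        self.num_samples = num_samples
        self.mlp = nn.Sequential(
            nn.Linear(ray_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_samples),
        )

    def forward(self, rays):                  # rays: (B, 6) origin + direction
        return torch.sigmoid(self.mlp(rays))  # (B, N) per-sample importance

class ShadingNetwork(nn.Module):
    """Maps a 3D sample position plus view direction to density and RGB."""
    def __init__(self, in_dim=6, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),             # (sigma, r, g, b)
        )

    def forward(self, x):
        out = self.mlp(x)
        sigma = torch.relu(out[..., :1])
        rgb = torch.sigmoid(out[..., 1:])
        return sigma, rgb

def render_rays(rays, sampler, shader, near=2.0, far=6.0, threshold=0.1):
    """Render a batch of rays, keeping only samples the sampler deems important."""
    B, N = rays.shape[0], sampler.num_samples
    t = torch.linspace(near, far, N, device=rays.device)          # fixed depths
    origins, dirs = rays[:, :3], rays[:, 3:]
    positions = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]  # (B, N, 3)

    importance = sampler(rays)                                    # (B, N)
    keep = importance > threshold                                 # sparse sample mask

    # Shade every sample here for simplicity; an optimized renderer would
    # gather only the kept samples to save compute.
    inp = torch.cat([positions, dirs[:, None, :].expand(-1, N, -1)], dim=-1)
    sigma, rgb = shader(inp)                                      # (B, N, 1), (B, N, 3)
    sigma = sigma * keep[..., None]                               # zero out skipped samples

    # Standard alpha compositing along the ray.
    delta = (far - near) / N
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)           # (B, N)
    trans = torch.cumprod(torch.cat(
        [torch.ones(B, 1, device=rays.device), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)                  # (B, 3) color

# Example usage with random rays.
sampler, shader = SamplingNetwork(), ShadingNetwork()
rays = torch.randn(4, 6)
color = render_rays(rays, sampler, shader)
print(color.shape)  # torch.Size([4, 3])
```

In this sketch the sparsity comes only from thresholding the sampling network's scores at inference time; the abstract's incremental sparsification during joint training and the fine-tuning to a target sample count are not reproduced here.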