Paper Title
Tensor4D : Efficient Neural 4D Decomposition for High-fidelity Dynamic Reconstruction and Rendering
Paper Authors
Paper Abstract
We present Tensor4D, an efficient yet effective approach to dynamic scene modeling. The key to our solution is an efficient 4D tensor decomposition method, so that the dynamic scene can be directly represented as a 4D spatio-temporal tensor. To tackle the accompanying memory issue, we decompose the 4D tensor hierarchically by projecting it first into three time-aware volumes and then into nine compact feature planes. In this way, spatial information over time can be captured in a compact and memory-efficient manner. When applying Tensor4D to dynamic scene reconstruction and rendering, we further factorize the 4D fields into different scales, so that structural motions and dynamic detail changes can be learned in a coarse-to-fine manner. The effectiveness of our method is validated on both synthetic and real-world scenes. Extensive experiments show that our method achieves high-quality dynamic reconstruction and rendering from sparse-view camera rigs or even a monocular camera. The code and dataset will be released at https://liuyebin.com/tensor4d/tensor4d.html.
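To make the hierarchical decomposition described in the abstract concrete, below is a minimal PyTorch sketch of a plane-based 4D feature field in the spirit of Tensor4D: a query point (x, y, z, t) is projected onto nine axis-aligned 2D feature planes (three per time-aware volume), each plane is bilinearly sampled, and the sampled features are concatenated. All names (Tensor4DSketch), the plane resolution, the feature dimension, and the exact choice of plane axes are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of a hierarchical plane-based 4D field in the spirit of
# Tensor4D. Names, resolutions, and the plane-axis layout are assumptions
# for illustration only, not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tensor4DSketch(nn.Module):
    def __init__(self, feat_dim: int = 16, res: int = 128):
        super().__init__()
        # Three time-aware volumes (x,y,t), (y,z,t), (x,z,t), each further
        # factorized into three axis-aligned 2D feature planes -> 9 planes.
        # Coordinate indices: 0 = x, 1 = y, 2 = z, 3 = t.
        self.plane_axes = [
            (0, 1), (0, 3), (1, 3),   # from the (x, y, t) volume
            (1, 2), (1, 3), (2, 3),   # from the (y, z, t) volume
            (0, 2), (0, 3), (2, 3),   # from the (x, z, t) volume
        ]
        self.planes = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(1, feat_dim, res, res))
             for _ in self.plane_axes]
        )

    def forward(self, xyzt: torch.Tensor) -> torch.Tensor:
        """xyzt: (N, 4) points with each coordinate normalized to [-1, 1].
        Returns (N, 9 * feat_dim) concatenated plane features."""
        feats = []
        for plane, (a, b) in zip(self.planes, self.plane_axes):
            # Build a (1, N, 1, 2) sampling grid for this plane and
            # bilinearly interpolate its feature map.
            grid = xyzt[:, (a, b)].view(1, -1, 1, 2)
            sampled = F.grid_sample(plane, grid, mode="bilinear",
                                    align_corners=True)       # (1, C, N, 1)
            feats.append(sampled.squeeze(0).squeeze(-1).t())   # (N, C)
        return torch.cat(feats, dim=-1)

# Usage: query a few random space-time points. In a full pipeline, a small
# MLP (not shown) would map these features to density/SDF and color for
# volume rendering.
field = Tensor4DSketch()
pts = torch.rand(4, 4) * 2 - 1
print(field(pts).shape)  # torch.Size([4, 144])
```

The coarse-to-fine factorization mentioned in the abstract would, under this reading, correspond to combining several such fields at increasing plane resolutions, with the coarser levels capturing structural motion and the finer levels capturing dynamic detail.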