Paper Title

DI-Fusion: Online Implicit 3D Reconstruction with Deep Priors

Paper Authors

Jiahui Huang, Shi-Sheng Huang, Haoxuan Song, Shi-Min Hu

Paper Abstract

Previous online 3D dense reconstruction methods struggle to achieve the balance between memory storage and surface quality, largely due to the usage of stagnant underlying geometry representation, such as TSDF (truncated signed distance functions) or surfels, without any knowledge of the scene priors. In this paper, we present DI-Fusion (Deep Implicit Fusion), based on a novel 3D representation, i.e. Probabilistic Local Implicit Voxels (PLIVoxs), for online 3D reconstruction with a commodity RGB-D camera. Our PLIVox encodes scene priors considering both the local geometry and uncertainty parameterized by a deep neural network. With such deep priors, we are able to perform online implicit 3D reconstruction achieving state-of-the-art camera trajectory estimation accuracy and mapping quality, while achieving better storage efficiency compared with previous online 3D reconstruction approaches. Our implementation is available at https://www.github.com/huangjh-pub/di-fusion.
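
To make the PLIVox idea more concrete, below is a minimal, hypothetical sketch of a probabilistic local implicit voxel grid: each sparse voxel stores a small latent code, and a shared decoder network maps that code plus a normalized local coordinate to a signed-distance mean and a variance (the per-point uncertainty). The class names, latent dimension, voxel keying scheme, and decoder layout are illustrative assumptions for exposition only, not the released DI-Fusion implementation (see the linked repository for the actual code).

```python
# Hypothetical sketch of a probabilistic local implicit voxel grid.
# Each occupied voxel holds a latent code; a shared MLP decodes
# (latent, local coordinate) -> (SDF mean, variance). Illustrative only.

import torch
import torch.nn as nn

class ImplicitDecoder(nn.Module):
    """Shared MLP decoding a per-voxel latent + local 3D point into (mu, variance)."""
    def __init__(self, latent_dim: int = 29, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),             # [SDF mean, log-variance]
        )

    def forward(self, latent: torch.Tensor, xyz_local: torch.Tensor):
        out = self.net(torch.cat([latent, xyz_local], dim=-1))
        mu, log_var = out[..., 0], out[..., 1]
        return mu, torch.exp(log_var)         # variance models per-point uncertainty


class SparseVoxelMap:
    """Sparse grid of local latent codes, keyed by integer voxel coordinates."""
    def __init__(self, voxel_size: float = 0.1, latent_dim: int = 29):
        self.voxel_size = voxel_size
        self.latent_dim = latent_dim
        self.voxels = {}                       # (i, j, k) -> latent code tensor

    def query(self, points: torch.Tensor, decoder: ImplicitDecoder):
        """Decode SDF mean/variance at world-space points (one voxel per point)."""
        keys = torch.floor(points / self.voxel_size).long()
        centers = (keys.float() + 0.5) * self.voxel_size
        local = (points - centers) / self.voxel_size   # normalized local coords
        latents = torch.stack([
            self.voxels.setdefault(tuple(k.tolist()),
                                   torch.zeros(self.latent_dim))
            for k in keys
        ])
        return decoder(latents, local)


# Usage: decode a few query points against an (initially empty) map.
decoder = ImplicitDecoder()
vmap = SparseVoxelMap()
pts = torch.rand(4, 3)
mu, var = vmap.query(pts, decoder)
print(mu.shape, var.shape)                     # torch.Size([4]) torch.Size([4])
```

The storage advantage claimed in the abstract comes from this style of representation: only a compact latent code is kept per occupied voxel instead of a dense TSDF grid, while the decoder weights are shared across the entire scene.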
