Paper Title
Movement-induced Priors for Deep Stereo
Paper Authors
Abstract
We propose a method for fusing stereo disparity estimation with movement-induced prior information. Instead of independent inference frame-by-frame, we formulate the problem as a non-parametric learning task in terms of a temporal Gaussian process prior with a movement-driven kernel for inter-frame reasoning. We present a hierarchy of three Gaussian process kernels depending on the availability of motion information, where our main focus is on a new gyroscope-driven kernel for handheld devices with low-quality MEMS sensors, thus also relaxing the requirement of having full 6D camera poses available. We show how our method can be combined with two state-of-the-art deep stereo methods. The method either works in a plug-and-play fashion with pre-trained deep stereo networks, or can be further improved by jointly training the kernels together with encoder-decoder architectures, leading to consistent improvements.
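To illustrate the core idea of the abstract, the following is a minimal sketch of temporal Gaussian process fusion. It is not the paper's actual method: the kernel form, the scalar per-pixel setting, and the function names (`motion_kernel`, `gp_fuse`) are all illustrative assumptions. It shows how noisy per-frame estimates can be smoothed by a GP prior whose kernel depends on accumulated camera motion (e.g., integrated gyroscope rotation) rather than on wall-clock time alone.

```python
import numpy as np

def motion_kernel(motion_dist, sigma=1.0, ell=0.5):
    """Squared-exponential kernel over accumulated motion distance.

    motion_dist: (T,) cumulative motion magnitude per frame, e.g. an
    integrated gyro rotation angle. A stand-in for the paper's
    movement-driven kernels (assumed form, not from the source).
    """
    d = motion_dist[:, None] - motion_dist[None, :]
    return sigma**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_fuse(per_frame_estimates, motion_dist, noise=0.1):
    """Fuse noisy per-frame scalar estimates with a temporal GP prior.

    per_frame_estimates: (T,) independent estimates, e.g. the disparity
    of one pixel predicted frame-by-frame by a deep stereo network.
    Returns the GP posterior mean at the same frames.
    """
    K = motion_kernel(motion_dist)
    T = len(per_frame_estimates)
    # Center the observations so the zero-mean GP prior does not
    # shrink the estimates toward zero.
    mu = per_frame_estimates.mean()
    y = per_frame_estimates - mu
    # Standard GP regression posterior mean with i.i.d. Gaussian noise.
    return mu + K @ np.linalg.solve(K + noise**2 * np.eye(T), y)
```

A usage sketch: frames captured while the camera is nearly still have small motion distance between them, so the kernel correlates them strongly and averages away per-frame noise; fast motion decorrelates frames, letting the fused estimate follow genuine disparity changes.

```python
rng = np.random.default_rng(0)
motion = np.linspace(0.0, 1.0, 10)          # accumulated motion per frame
obs = 2.0 + 0.3 * rng.standard_normal(10)   # noisy per-frame disparities
fused = gp_fuse(obs, motion, noise=0.3)     # smoother than obs
```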