Paper Title

Minimum Latency Deep Online Video Stabilization

Authors

Zhuofan Zhang, Zhen Liu, Ping Tan, Bing Zeng, Shuaicheng Liu

Abstract

We present a novel camera path optimization framework for the task of online video stabilization. Typically, a stabilization pipeline consists of three steps: motion estimation, path smoothing, and novel view rendering. Most previous methods concentrate on motion estimation, proposing various global or local motion models. In contrast, path optimization has received relatively little attention, especially in the important online setting, where no future frames are available. In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory, and focus on the latter two steps. Our network takes a short 2D camera path in a sliding window as input and outputs the stabilizing warp field of the last frame in the window, which warps the incoming frame to its stabilized position. A hybrid loss is formulated to constrain spatial and temporal consistency. In addition, we build a motion dataset that contains stable and unstable motion pairs for training. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively, and achieves comparable performance to offline methods. Our code and dataset are available at https://github.com/liuzhen03/NNDVS.
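To make the sliding-window idea concrete, below is a minimal PyTorch sketch of the online inference loop the abstract describes. `PathSmoothNet`, the window size, the 2D-translation path representation, and the dense-grid warping are hypothetical stand-ins chosen for illustration; they are not the actual NNDVS architecture or API, which lives in the linked repository.

```python
# Minimal sketch of the sliding-window online loop described in the abstract.
# PathSmoothNet, WINDOW, and the 2D-translation path representation are
# hypothetical placeholders, not the actual NNDVS model or API.
import collections

import torch
import torch.nn.functional as F

WINDOW = 16  # assumed number of past motions kept in the sliding window


class PathSmoothNet(torch.nn.Module):
    """Toy path-smoothing network: short 2D camera path -> warp field."""

    def __init__(self, grid_h=8, grid_w=8):
        super().__init__()
        self.grid_h, self.grid_w = grid_h, grid_w
        # One 2D translation per frame in the window; the real model may use
        # a richer per-frame motion representation.
        self.fc = torch.nn.Linear(WINDOW * 2, grid_h * grid_w * 2)

    def forward(self, path):          # path: (B, WINDOW, 2)
        out = self.fc(path.flatten(1))
        return out.view(-1, self.grid_h, self.grid_w, 2)  # (B, h, w, 2)


net = PathSmoothNet().eval()
path = collections.deque(maxlen=WINDOW)  # sliding window over the 2D path


@torch.no_grad()
def stabilize_frame(frame, motion_xy):
    """Append the newest inter-frame motion, then warp the incoming frame
    to its stabilized position using the warp field of the last frame."""
    path.append(motion_xy)
    if len(path) < WINDOW:
        return frame                  # not enough history yet; pass through
    p = torch.tensor(list(path)).unsqueeze(0).float()  # (1, WINDOW, 2)
    warp = net(p)                     # coarse warp field for the newest frame
    B, C, H, W = frame.shape
    # Identity sampling grid in normalized [-1, 1] coordinates ...
    theta = torch.eye(2, 3).unsqueeze(0).expand(B, 2, 3)
    base = F.affine_grid(theta, (B, C, H, W), align_corners=False)
    # ... plus the predicted offsets, upsampled to full resolution.
    offs = F.interpolate(warp.permute(0, 3, 1, 2), size=(H, W),
                         mode="bilinear", align_corners=False)
    return F.grid_sample(frame, base + offs.permute(0, 2, 3, 1),
                         align_corners=False)


# Usage: feed frames and per-frame motions as they arrive (online, no
# future frames needed), here with random placeholders:
for _ in range(WINDOW + 1):
    stabilized = stabilize_frame(torch.rand(1, 3, 240, 320), (0.01, -0.02))
```

Because the network only ever looks at frames already seen, each incoming frame can be warped to its stabilized position as soon as it arrives, which is what keeps the latency minimal in the online setting.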
