Paper Title
Sat2Vid: Street-view Panoramic Video Synthesis from a Single Satellite Image
Paper Authors
Paper Abstract
We present a novel method for synthesizing temporally and geometrically consistent street-view panoramic video from a single satellite image and a camera trajectory. Existing cross-view synthesis approaches focus on still images, while video synthesis in this setting has not yet received enough attention. For geometric and temporal consistency, our approach explicitly builds a 3D point cloud representation of the scene and maintains dense 3D-2D correspondences across frames that reflect the geometric scene configuration inferred from the satellite view. For synthesis in 3D space, we implement a cascaded network architecture with two hourglass modules that generate point-wise coarse and fine features from semantics and per-class latent vectors, followed by projection to frames and an upsampling module that produces the final realistic video. By leveraging the computed correspondences, the produced street-view video frames adhere to the 3D geometric scene structure and remain temporally consistent. Qualitative and quantitative experiments demonstrate superior results compared to state-of-the-art synthesis approaches, which either lack temporal consistency or fail to produce realistic appearance. To the best of our knowledge, our work is the first to extend cross-view image synthesis to video.
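The key idea behind the temporal consistency claimed above is that every frame is rendered from the same 3D point cloud, so each point carries one fixed feature that reappears at its projected location in every frame. The sketch below illustrates this mechanism in a heavily simplified form; it is not the paper's implementation. All function names are hypothetical, the camera poses are translation-only, and there is no z-buffering, occlusion handling, or learned upsampling.

```python
import numpy as np

def project_to_panorama(points_cam, height, width):
    """Project 3D points (in camera coordinates) onto an equirectangular
    panorama; returns integer (row, col) pixel coordinates per point."""
    x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    lon = np.arctan2(x, z)                                    # azimuth in [-pi, pi]
    lat = np.arcsin(y / np.linalg.norm(points_cam, axis=1))   # elevation in [-pi/2, pi/2]
    col = ((lon + np.pi) / (2 * np.pi) * width).astype(int) % width
    row = ((lat + np.pi / 2) / np.pi * height).astype(int).clip(0, height - 1)
    return row, col

def render_frames(points_world, point_features, cam_positions, h=64, w=128):
    """Render one panorama per camera position by splatting each point's
    feature. Because the same per-point feature is reused in every frame,
    corresponding pixels agree across time (toy translation-only poses)."""
    frames = []
    for t in cam_positions:
        pts_cam = points_world - t                 # pose = translation only, for brevity
        row, col = project_to_panorama(pts_cam, h, w)
        frame = np.zeros((h, w, point_features.shape[1]))
        frame[row, col] = point_features           # later points overwrite (no z-buffer)
        frames.append(frame)
    return frames
```

In the actual method the per-point features come from the cascaded hourglass modules rather than being fixed inputs, but the consistency argument is the same: the 3D-2D correspondences tie each pixel in each frame back to one shared 3D point.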