Paper Title
3D-Aware Video Generation
Paper Authors
Paper Abstract
Generative models have emerged as an essential building block for many image synthesis and editing tasks. Recent advances in this field have also enabled high-quality 3D or video content to be generated that exhibits either multi-view or temporal consistency. With our work, we explore 4D generative adversarial networks (GANs) that learn unconditional generation of 3D-aware videos. By combining neural implicit representations with a time-aware discriminator, we develop a GAN framework that synthesizes 3D video supervised only with monocular videos. We show that our method learns a rich embedding of decomposable 3D structures and motions that enables new visual effects of spatio-temporal renderings while producing imagery with quality comparable to that of existing 3D or video GANs.
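The core idea of the abstract — a generator built on a neural implicit (NeRF-like) representation, rendered from arbitrary viewpoints and times, paired with a discriminator conditioned on the time offset between frames — can be sketched roughly as below. This is a toy illustration under stated assumptions, not the authors' model: every function here (`implicit_radiance`, `render_frame`, `time_aware_discriminator_score`) is a hypothetical stand-in, and the learned MLPs and volume rendering are replaced by trivial deterministic math.

```python
import numpy as np

def implicit_radiance(z, xyz, t):
    """Toy stand-in for a neural implicit field: maps 3D points,
    a latent code z, and time t to a scalar color value.
    (Hypothetical; the paper would use a learned MLP here.)"""
    return np.tanh(xyz @ z[:3].reshape(3, 1) + np.sin(t))

def render_frame(z, camera_angle, t, res=4):
    """Render a tiny res x res frame by sampling along view rays.
    Real volume rendering integrates density-weighted color; here we
    just average a few samples per ray for illustration. Multi-view
    consistency comes from querying the SAME field at a new angle."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, res), np.linspace(-1, 1, res))
    c, s = np.cos(camera_angle), np.sin(camera_angle)
    dirs = np.stack([xs * c - s, ys, xs * s + c], axis=-1).reshape(-1, 3)
    samples = [implicit_radiance(z, dirs * d, t) for d in (0.5, 1.0, 1.5)]
    return np.mean(samples, axis=0).reshape(res, res)

def time_aware_discriminator_score(frame_a, frame_b, dt, w):
    """Score a pair of frames conditioned on their time offset dt.
    Conditioning on dt is what makes the discriminator 'time-aware':
    it can penalize motion that is implausible for that offset."""
    feats = np.concatenate([frame_a.ravel(), frame_b.ravel(), [dt]])
    return float(1.0 / (1.0 + np.exp(-feats @ w)))  # sigmoid "real" prob

rng = np.random.default_rng(0)
z = rng.standard_normal(8)                      # latent code
f0 = render_frame(z, camera_angle=0.0, t=0.0)   # first frame
f1 = render_frame(z, camera_angle=0.3, t=0.5)   # new view AND new time
w = rng.standard_normal(f0.size * 2 + 1) * 0.1  # toy discriminator weights
score = time_aware_discriminator_score(f0, f1, dt=0.5, w=w)
print(f0.shape, 0.0 <= score <= 1.0)
```

In an actual GAN setup, the discriminator score would drive adversarial losses on both the per-frame image quality and the pairwise motion plausibility, which is how monocular video alone can supervise a 4D generator.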