Title

MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation

Authors

Ludan Ruan, Yiyang Ma, Huan Yang, Huiguo He, Bei Liu, Jianlong Fu, Nicholas Jing Yuan, Qin Jin, Baining Guo

Abstract

We propose the first joint audio-video generation framework that brings engaging watching and listening experiences simultaneously, towards high-quality realistic videos. To generate joint audio-video pairs, we propose a novel Multi-Modal Diffusion model (i.e., MM-Diffusion), with two-coupled denoising autoencoders. In contrast to existing single-modal diffusion models, MM-Diffusion consists of a sequential multi-modal U-Net for a joint denoising process by design. Two subnets for audio and video learn to gradually generate aligned audio-video pairs from Gaussian noises. To ensure semantic consistency across modalities, we propose a novel random-shift based attention block bridging over the two subnets, which enables efficient cross-modal alignment, and thus reinforces the audio-video fidelity for each other. Extensive experiments show superior results in unconditional audio-video generation, and zero-shot conditional tasks (e.g., video-to-audio). In particular, we achieve the best FVD and FAD on Landscape and AIST++ dancing datasets. Turing tests of 10k votes further demonstrate dominant preferences for our model. The code and pre-trained models can be downloaded at https://github.com/researchmm/MM-Diffusion.
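The random-shift based attention block is the one concrete mechanism described in the abstract, so a short sketch may help make it tangible. Below is a minimal PyTorch illustration under the assumption that video is represented as per-frame features and audio as per-segment features; the class name `RandomShiftCrossAttention`, the window size, and the indexing scheme are hypothetical choices for exposition and are not taken from the official repository linked above.

```python
# Minimal, illustrative PyTorch sketch of random-shift cross-modal attention.
# Class name, window size, and tensor shapes are assumptions for exposition,
# not the authors' released implementation.
import torch
import torch.nn as nn


class RandomShiftCrossAttention(nn.Module):
    """Each video frame attends to a small, randomly shifted window of audio
    tokens rather than the full audio sequence, keeping cross-modal attention cheap."""

    def __init__(self, dim: int, window: int = 4):
        super().__init__()
        self.window = window
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, video: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # video: (B, Tv, C) frame features; audio: (B, Ta, C) audio-segment features
        B, Tv, C = video.shape
        Ta, device = audio.shape[1], audio.device
        q = self.to_q(video)                        # (B, Tv, C)
        k, v = self.to_k(audio), self.to_v(audio)   # (B, Ta, C)

        # Draw one random temporal shift per forward pass; frame t attends to a
        # window of audio tokens starting near its (roughly) aligned position.
        shift = int(torch.randint(0, self.window, (1,)))
        starts = (torch.arange(Tv, device=device) * Ta // Tv + shift) % Ta
        idx = (starts[:, None] + torch.arange(self.window, device=device)[None]) % Ta

        k_win, v_win = k[:, idx], v[:, idx]         # (B, Tv, W, C)
        attn = torch.einsum('btc,btwc->btw', q, k_win) / C ** 0.5
        out = torch.einsum('btw,btwc->btc', attn.softmax(dim=-1), v_win)
        return video + self.proj(out)               # residual connection back to video
```

Restricting each query to a window of size W keeps the cross-modal attention cost linear in the number of frames rather than quadratic across both sequences, and re-sampling the shift at every denoising step means different audio positions are covered over the course of sampling, which is one plausible reading of how the random shift enables "efficient cross-modal alignment".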
