Paper Title
Music Enhancement via Image Translation and Vocoding
Paper Authors
Paper Abstract
Consumer-grade music recordings such as those captured by mobile devices typically contain distortions in the form of background noise, reverb, and microphone-induced EQ. This paper presents a deep learning approach to enhance low-quality music recordings by combining (i) an image-to-image translation model for manipulating audio in its mel-spectrogram representation and (ii) a music vocoding model for mapping synthetically generated mel-spectrograms to perceptually realistic waveforms. We find that this approach to music enhancement outperforms baselines which use classical methods for mel-spectrogram inversion and an end-to-end approach directly mapping noisy waveforms to clean waveforms. Additionally, in evaluating the proposed method with a listening test, we analyze the reliability of common audio enhancement evaluation metrics when used in the music domain.
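To make the pipeline's front end concrete, below is a minimal NumPy sketch of a log mel-spectrogram computation, the representation in which the paper's image-to-image translation model operates. The parameter values (`sr`, `n_fft`, `hop`, `n_mels`) are illustrative defaults, not the paper's configuration, and a practical system would use a library such as librosa or torchaudio instead.

```python
import numpy as np

def hz_to_mel(f):
    # Standard HTK-style mel scale conversion.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters with centers spaced evenly on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            if center > left:
                fb[i - 1, k] = (k - left) / (center - left)
        for k in range(center, right):
            if right > center:
                fb[i - 1, k] = (right - k) / (right - center)
    return fb

def mel_spectrogram(y, sr=16000, n_fft=1024, hop=256, n_mels=80):
    # Frame the waveform, window each frame, take the power spectrum,
    # then project onto the mel filterbank and log-compress.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(y) - n_fft) // hop
    frames = np.stack(
        [y[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2     # (frames, n_fft//2+1)
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T     # (frames, n_mels)
    return np.log(mel + 1e-6)                             # log compression

# Example: one second of a 440 Hz sine at 16 kHz.
y = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
S = mel_spectrogram(y)   # 2-D "image" (time frames x mel bands)
```

The resulting 2-D array is what allows the enhancement problem to be cast as image-to-image translation; the vocoder then maps such a (possibly synthetically generated) mel-spectrogram back to a waveform, since the mel projection and phase discard make the transform non-invertible in closed form.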