Paper Title

Controllable and Lossless Non-Autoregressive End-to-End Text-to-Speech

Authors

Zhengxi Liu, Qiao Tian, Chenxu Hu, Xudong Liu, Menglin Wu, Yuping Wang, Hang Zhao, Yuxuan Wang

Abstract

Some recent studies have demonstrated the feasibility of single-stage neural text-to-speech, which does not need to generate mel-spectrograms but generates the raw waveforms directly from the text. Single-stage text-to-speech often faces two problems: a) the one-to-many mapping problem due to multiple speech variations, and b) insufficient high-frequency reconstruction due to the lack of supervision from ground-truth acoustic features during training. To solve problem a) and generate more expressive speech, we propose a novel phoneme-level prosody modeling method based on a variational autoencoder with normalizing flows to model the underlying prosodic information in speech. We also use a prosody predictor to support end-to-end expressive speech synthesis. Furthermore, we propose a dual parallel autoencoder to introduce supervision from the ground-truth acoustic features during training, solving problem b) and enabling our model to generate high-quality speech. We compare the synthesis quality with state-of-the-art text-to-speech systems on an internal expressive English dataset. Both qualitative and quantitative evaluations demonstrate the superiority and robustness of our method for lossless speech generation, while also showing a strong capability in prosody modeling.
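The abstract describes a phoneme-level prosody VAE whose prior is refined with normalizing flows, plus a prosody predictor that supplies prosody latents from text at inference time. The paper page gives no code, so what follows is only a minimal PyTorch sketch of that idea under assumed details: the `AffineCoupling` flow, the GRU encoders, the latent size of 16, and the Monte Carlo KL estimate are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of phoneme-level prosody modeling with a flow-refined
# VAE prior and a prosody predictor; shapes and modules are assumptions.
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Simple affine coupling layer acting on per-phoneme latent vectors."""

    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, z):
        za, zb = z[..., :self.half], z[..., self.half:]
        log_s, t = self.net(za).chunk(2, dim=-1)
        zb = zb * torch.exp(log_s) + t
        return torch.cat([za, zb], dim=-1), log_s.sum(dim=-1)


class PhonemeProsodyVAE(nn.Module):
    """Posterior q(z | per-phoneme acoustics); prior refined by a normalizing flow."""

    def __init__(self, feat_dim=80, latent_dim=16, n_flows=4):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, 128, batch_first=True)
        self.to_stats = nn.Linear(128, 2 * latent_dim)
        self.flows = nn.ModuleList(AffineCoupling(latent_dim) for _ in range(n_flows))

    def forward(self, phoneme_feats):
        # phoneme_feats: (batch, n_phonemes, feat_dim); acoustic frames assumed
        # to be averaged within each phoneme's aligned span beforehand.
        h, _ = self.encoder(phoneme_feats)
        mu, logvar = self.to_stats(h).chunk(2, dim=-1)
        eps = torch.randn_like(mu)
        z = mu + eps * torch.exp(0.5 * logvar)
        log_q = -0.5 * (logvar + eps ** 2).sum(-1)    # log q(z|x), up to a constant
        log_det = torch.zeros(z.shape[:-1], device=z.device)
        for flow in self.flows:                        # push z through the flow prior
            z, ld = flow(z)
            log_det = log_det + ld
        log_p = -0.5 * (z ** 2).sum(-1) + log_det      # flow prior log-density, same constant
        kl = (log_q - log_p).mean()                    # Monte Carlo KL estimate
        return z, kl


class ProsodyPredictor(nn.Module):
    """Predicts per-phoneme prosody latents from text encodings at inference time."""

    def __init__(self, text_dim=256, latent_dim=16):
        super().__init__()
        self.rnn = nn.GRU(text_dim, 128, batch_first=True)
        self.out = nn.Linear(128, latent_dim)

    def forward(self, text_enc):
        h, _ = self.rnn(text_enc)
        return self.out(h)


if __name__ == "__main__":
    vae = PhonemeProsodyVAE()
    z, kl = vae(torch.randn(2, 37, 80))                # 2 utterances, 37 phonemes each
    pred = ProsodyPredictor()(torch.randn(2, 37, 256))
    print(z.shape, pred.shape, float(kl))
```

The dual parallel autoencoder and the waveform decoder are omitted; the sketch only illustrates how a flow-refined prior can tighten the phoneme-level KL term that a prosody predictor could later be trained to match from text alone.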
