Paper Title
Exploring Effective Fusion Algorithms for Speech Based Self-Supervised Learning Models
Paper Authors
Paper Abstract
Self-supervised learning (SSL) has achieved great success in various areas, including speech processing. Recently, speech-based SSL models have been shown to extract superior universal representations on a range of downstream tasks compared to traditional hand-crafted features (e.g., FBank, MFCC) in the SUPERB benchmark. However, different types of SSL models may exhibit distinct strengths on different downstream tasks. To better exploit the potential of SSL models, in this work we explore the effective fusion of multiple SSL models. A series of model fusion algorithms are investigated and compared by combining two types of SSL models, HuBERT and data2vec, on two representative tasks from the SUPERB benchmark: speaker identification (SID) and automatic speech recognition (ASR). The experimental results demonstrate that our proposed fusion algorithms can significantly boost the performance of the individual models.
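The abstract does not spell out the specific fusion algorithms studied, so the following is only a minimal sketch of one plausible approach: projecting frame-level features from two upstream SSL models (e.g., HuBERT and data2vec) to a common dimension and combining them with a learnable weight. All class names, dimensions, and tensors here are hypothetical placeholders, not the paper's actual method.

```python
import torch
import torch.nn as nn


class WeightedSumFusion(nn.Module):
    """Illustrative fusion module (an assumption, not the paper's algorithm):
    project the frame-level outputs of two SSL upstreams to a shared dimension,
    then mix them with a learnable, softmax-normalized weight."""

    def __init__(self, dim_a: int, dim_b: int, fused_dim: int = 256):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, fused_dim)
        self.proj_b = nn.Linear(dim_b, fused_dim)
        # Two scalar logits that become convex fusion weights.
        self.weight_logits = nn.Parameter(torch.zeros(2))

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
        # feats_a: (batch, frames, dim_a); feats_b: (batch, frames, dim_b),
        # assumed to be time-aligned (both upstreams use the same frame rate).
        a = self.proj_a(feats_a)
        b = self.proj_b(feats_b)
        w = torch.softmax(self.weight_logits, dim=0)
        return w[0] * a + w[1] * b  # (batch, frames, fused_dim)


# Dummy tensors standing in for the two upstream outputs (hypothetical shapes).
hubert_feats = torch.randn(4, 100, 768)
data2vec_feats = torch.randn(4, 100, 768)
fusion = WeightedSumFusion(768, 768)
fused = fusion(hubert_feats, data2vec_feats)
print(fused.shape)  # torch.Size([4, 100, 256])
```

In a SUPERB-style setup, the fused representation would then feed a lightweight downstream head (e.g., for SID or ASR); other fusion choices, such as concatenation or layer-wise weighting, would slot into the same interface.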