Paper Title
Pea-KD: Parameter-efficient and Accurate Knowledge Distillation on BERT
Paper Authors
Paper Abstract
How can we efficiently compress a model while maintaining its performance? Knowledge Distillation (KD) is one of the most widely known methods for model compression. In essence, KD trains a smaller student model based on a larger teacher model and tries to retain as much of the teacher model's performance as possible. However, existing KD methods suffer from the following limitations. First, since the student model is smaller in absolute size, it inherently lacks model capacity. Second, the absence of an initial guide for the student model makes it difficult for the student to imitate the teacher model to its fullest. Conventional KD methods yield low performance due to these limitations. In this paper, we propose Pea-KD (Parameter-efficient and Accurate Knowledge Distillation), a novel approach to KD. Pea-KD consists of two main parts: Shuffled Parameter Sharing (SPS) and Pretraining with Teacher's Predictions (PTP). This combination alleviates the limitations of KD described above. SPS is a new parameter sharing method that increases the capacity of the student model. PTP is a KD-specialized initialization method that acts as a good initial guide for the student. Combined, they yield a significant increase in the student model's performance. Experiments on BERT with different datasets and tasks show that the proposed approach improves the student model's performance by 4.4% on average on four GLUE tasks, outperforming existing KD baselines by significant margins.
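To make the teacher-student setup described in the abstract concrete, the sketch below shows a generic knowledge-distillation objective in PyTorch: the student is trained on a mix of the teacher's softened predictions and the ground-truth labels. This is only a minimal illustration of standard KD, with an assumed temperature T, mixing weight alpha, and toy models chosen for the example; it does not implement the paper's SPS or PTP components.

```python
# A minimal, generic knowledge-distillation loss in PyTorch.
# This is a sketch of the standard soft-target objective, NOT Pea-KD itself;
# T, alpha, and the toy models are illustrative assumptions, not paper values.
import torch
import torch.nn as nn
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of a soft-target KL term and a hard-label cross-entropy term."""
    # Soften both distributions with temperature T; scale by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard


if __name__ == "__main__":
    # Toy example: a larger "teacher" and a smaller "student" classifier.
    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 4))
    student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 4))

    x = torch.randn(8, 32)
    y = torch.randint(0, 4, (8,))

    with torch.no_grad():                 # teacher is frozen during distillation
        t_logits = teacher(x)
    s_logits = student(x)

    loss = distillation_loss(s_logits, t_logits, y)
    loss.backward()                       # only the student receives gradients
    print(f"distillation loss: {loss.item():.4f}")
```

In this generic setup the student's capacity and initialization are left untouched, which is exactly where the abstract's two limitations arise; Pea-KD addresses them with SPS (parameter sharing) and PTP (KD-specialized initialization) on top of an objective of this kind.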