Paper Title

DFM: Dialogue Foundation Model for Universal Large-Scale Dialogue-Oriented Task Learning

Authors

Zhi Chen, Jijia Bao, Lu Chen, Yuncong Liu, Da Ma, Bei Chen, Mengyue Wu, Su Zhu, Xin Dong, Fujiang Ge, Qingliang Miao, Jian-Guang Lou, Kai Yu

Abstract

Building a universal conversational agent has been a long-standing goal of the dialogue research community. Most previous works focus on only a small set of dialogue tasks. In this work, we aim to build a unified dialogue foundation model (DFM) that can be used to solve massive, diverse dialogue tasks. To achieve this goal, a large-scale, well-annotated dialogue dataset with rich task diversity (DialogZoo) is collected. We introduce a framework to unify all dialogue tasks and propose novel auxiliary self-supervised tasks to achieve stable training of DFM on the highly diverse, large-scale DialogZoo corpus. Experiments show that, compared with models of the same size, DFM achieves state-of-the-art or competitive performance on a very rich set of cross-domain downstream dialogue tasks. This demonstrates that DFM largely extends the ability of unified dialogue pre-trained models.
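The abstract describes a framework that unifies all dialogue tasks so one model can learn them jointly. A common way to realize this, and a plausible reading of the setup rather than the paper's actual scheme, is to cast every task as text-to-text generation with a task tag prepended to the flattened dialogue history. The sketch below illustrates the idea; the task names, speaker markers, serialization format, and `serialize_example` function are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: unifying heterogeneous dialogue tasks into one text-to-text
# format that a seq2seq model like DFM could consume. The prompt layout and
# task tags here are illustrative assumptions, not the paper's exact scheme.

from typing import List, Tuple


def serialize_example(task: str, history: List[str], target: str) -> Tuple[str, str]:
    """Flatten one dialogue example into an (input_text, output_text) pair.

    task    -- a task tag such as "response_generation" or "state_tracking"
    history -- dialogue turns, oldest first, alternating user/system
    target  -- the task-specific label or response the model should generate
    """
    turns = " ".join(
        f"<user> {utt}" if i % 2 == 0 else f"<system> {utt}"
        for i, utt in enumerate(history)
    )
    return f"[{task}] {turns}", target


if __name__ == "__main__":
    # Response generation: the model produces the next system turn.
    src, tgt = serialize_example(
        "response_generation",
        ["I need a cheap hotel downtown.",
         "Any star rating preference?",
         "Three stars or above."],
        "The City Inn is a 3-star hotel in the centre. Shall I book it?",
    )
    print(src)
    print(tgt)

    # Dialogue state tracking: same input format, different target string,
    # so both tasks can share one encoder-decoder and one training loop.
    src, tgt = serialize_example(
        "state_tracking",
        ["I need a cheap hotel downtown."],
        "hotel-price=cheap; hotel-area=centre",
    )
    print(src)
    print(tgt)
```

Because every task reduces to the same (input text, output text) interface, examples from all tasks can be mixed in a single training stream, which is what makes large-scale multi-task pre-training of this kind feasible.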
