Paper Title

Unified Knowledge Prompt Pre-training for Customer Service Dialogues

Paper Authors

Keqing He, Jingang Wang, Chaobo Sun, Wei Wu

Paper Abstract

Dialogue bots have been widely applied in customer service scenarios to provide a timely and user-friendly experience. These bots must classify the appropriate domain of a dialogue, understand the intent of users, and generate proper responses. Existing dialogue pre-training models are designed for only a few dialogue tasks and ignore the weakly-supervised expert knowledge in customer service dialogues. In this paper, we propose a novel unified knowledge prompt pre-training framework, UFA (Unified Model For All Tasks), for customer service dialogues. We formulate all the tasks of customer service dialogues as a unified text-to-text generation task and introduce a knowledge-driven prompt strategy to jointly learn from a mixture of distinct dialogue tasks. We pre-train UFA on a large-scale Chinese customer service corpus collected from practical scenarios and achieve significant improvements on both natural language understanding (NLU) and natural language generation (NLG) benchmarks.
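To make the unified text-to-text formulation concrete, here is a minimal sketch of how heterogeneous dialogue tasks (domain classification, intent detection, response generation) could all be cast into prompted (source, target) text pairs for a single generative model. The prompt templates, field names, and examples below are illustrative assumptions, not the paper's actual formats.

```python
# Sketch: cast every customer-service dialogue task into one
# text-to-text format, so a single seq2seq model can learn them jointly.
# All templates and labels here are hypothetical stand-ins.

from dataclasses import dataclass


@dataclass
class DialogueExample:
    task: str      # "domain", "intent", or "response"
    context: str   # dialogue history, turns joined into one string
    label: str     # class name or gold response, always plain text


# Hypothetical task prompts; in a knowledge-driven variant these could be
# enriched with expert knowledge (e.g. candidate domains or intent glossaries).
PROMPTS = {
    "domain": "classify domain: {context}",
    "intent": "detect intent: {context}",
    "response": "generate response: {context}",
}


def to_text_to_text(example: DialogueExample) -> tuple[str, str]:
    """Turn any dialogue task into a unified (source, target) text pair."""
    source = PROMPTS[example.task].format(context=example.context)
    return source, example.label


if __name__ == "__main__":
    # A mixture of distinct tasks over the same dialogue context,
    # ready to be batched together for joint pre-training.
    batch = [
        DialogueExample("domain", "user: my package never arrived", "logistics"),
        DialogueExample("intent", "user: my package never arrived", "track_order"),
        DialogueExample("response", "user: my package never arrived",
                        "Sorry to hear that. Could you share your order number?"),
    ]
    for ex in batch:
        src, tgt = to_text_to_text(ex)
        print(f"{src!r} -> {tgt!r}")
```

Because both classification labels and responses are emitted as plain text targets, NLU and NLG tasks can share one encoder-decoder and one training objective, which is the core idea the abstract describes.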
