Paper Title

Multi-Modal Open-Domain Dialogue

Paper Authors

Kurt Shuster, Eric Michael Smith, Da Ju, Jason Weston

Paper Abstract

Recent work in open-domain conversational agents has demonstrated that significant improvements in model engagingness and humanness metrics can be achieved via massive scaling in both pre-training data and model size (Adiwardana et al., 2020; Roller et al., 2020). However, if we want to build agents with human-like abilities, we must expand beyond handling just text. A particularly important topic is the ability to see images and communicate about what is perceived. With the goal of engaging humans in multi-modal dialogue, we investigate combining components from state-of-the-art open-domain dialogue agents with those from state-of-the-art vision models. We study incorporating different image fusion schemes and domain-adaptive pre-training and fine-tuning strategies, and show that our best resulting model outperforms strong existing models in multi-modal dialogue while simultaneously performing as well as its predecessor (text-only) BlenderBot (Roller et al., 2020) in text-based conversation. We additionally investigate and incorporate safety components in our final model, and show that such efforts do not diminish model performance with respect to engagingness metrics.
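The abstract mentions comparing different image fusion schemes for combining a vision model's output with a Transformer dialogue model. As a rough illustration only (the paper's exact architecture and hyperparameters are not specified here), the sketch below shows one simple fusion style: project pre-computed image features into the Transformer's hidden space and append them as an extra position of the text encoder's output, so the decoder can attend over both modalities. All names (`LateFusionEncoder`, `image_dim`, the dimensions) are hypothetical placeholders, not the authors' code.

```python
import torch
import torch.nn as nn

class LateFusionEncoder(nn.Module):
    """Hypothetical sketch of one image-fusion scheme: fuse a pre-computed
    image feature vector with the text encoder's output sequence."""

    def __init__(self, d_model=512, image_dim=2048, n_layers=2, n_heads=8, vocab=8000):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Linear projection from the image model's feature space (e.g. a
        # pooled CNN feature, dimension assumed here) into the text model's.
        self.image_proj = nn.Linear(image_dim, d_model)

    def forward(self, token_ids, image_feats):
        # token_ids: (batch, seq_len); image_feats: (batch, image_dim)
        text_states = self.text_encoder(self.embed(token_ids))
        image_token = self.image_proj(image_feats).unsqueeze(1)  # (batch, 1, d_model)
        # Fuse by concatenating the projected image as one extra position,
        # so a downstream decoder attends jointly over text and image.
        return torch.cat([text_states, image_token], dim=1)

# Usage: the fused states would feed a standard Transformer decoder.
enc = LateFusionEncoder()
states = enc(torch.randint(0, 8000, (2, 16)), torch.randn(2, 2048))
print(states.shape)  # torch.Size([2, 17, 512])
```

Earlier fusion variants instead inject image features before or inside the text encoder; the paper's contribution, per the abstract, is an empirical comparison of such schemes combined with domain-adaptive pre-training and fine-tuning.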
