Paper Title


MuMUR : Multilingual Multimodal Universal Retrieval

Paper Authors

Avinash Madasu, Estelle Aflalo, Gabriela Ben Melech Stan, Shachar Rosenman, Shao-Yen Tseng, Gedas Bertasius, Vasudev Lal

Paper Abstract


Multi-modal retrieval has seen tremendous progress with the development of vision-language models. However, further improving these models requires additional labelled data, which is a huge manual effort. In this paper, we propose MuMUR, a framework that utilizes knowledge transfer from a multilingual model to boost the performance of multi-modal (image and video) retrieval. We first use state-of-the-art machine translation models to construct pseudo ground-truth multilingual visual-text pairs. We then use this data to learn a joint vision-text representation in which English and non-English text queries are represented in a common embedding space based on pretrained multilingual models. We evaluate our proposed approach on a diverse set of retrieval datasets: five video retrieval datasets (MSRVTT, MSVD, DiDeMo, Charades and MSRVTT multilingual) and two image retrieval datasets (Flickr30k and Multi30k). Experimental results demonstrate that our approach achieves state-of-the-art results on all video retrieval datasets, outperforming previous models. Additionally, our framework MuMUR significantly outperforms prior models on multilingual video retrieval. We also observe that MuMUR exhibits strong performance on image retrieval. This demonstrates the universal ability of MuMUR to perform retrieval across all visual inputs (image and video) and text inputs (monolingual and multilingual).
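The abstract describes aligning visual embeddings with both English captions and their machine-translated (pseudo ground-truth) counterparts in one shared embedding space. The paper does not publish code in this abstract, so the following is only a minimal sketch of the standard symmetric contrastive (InfoNCE-style) objective commonly used for such joint vision-text alignment; all function names, shapes, and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Project embeddings onto the unit sphere so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def symmetric_contrastive_loss(visual, text, temperature=0.05):
    """Symmetric cross-entropy over cosine-similarity logits.

    Matched (visual, text) pairs sit on the diagonal of the similarity
    matrix; the loss pulls them together and pushes mismatched pairs apart,
    in both the visual-to-text and text-to-visual directions.
    """
    v = l2_normalize(visual)
    t = l2_normalize(text)
    logits = v @ t.T / temperature           # (batch, batch) similarity matrix
    labels = np.arange(len(logits))          # i-th visual matches i-th text

    def nll(mat):
        # Numerically stable log-softmax along rows, then pick the diagonal.
        mat = mat - mat.max(axis=1, keepdims=True)
        logprob = mat - np.log(np.exp(mat).sum(axis=1, keepdims=True))
        return -logprob[labels, labels].mean()

    return 0.5 * (nll(logits) + nll(logits.T))

rng = np.random.default_rng(0)
video_emb = rng.normal(size=(4, 8))
# In training, English captions and their machine translations would both be
# encoded by a multilingual text model and pushed toward the same visual
# embedding; here we fake near-aligned text embeddings for illustration.
text_emb = video_emb + 0.01 * rng.normal(size=(4, 8))
loss = float(symmetric_contrastive_loss(video_emb, text_emb))
print(loss)
```

Because the fake text embeddings are nearly identical to the visual ones, the diagonal dominates the similarity matrix and the loss is close to zero; with random, unaligned embeddings it would be much larger.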
