Title
Vision-and-Language Pretraining
Authors
Abstract
With the burgeoning amount of image-text pair data and the diversity of Vision-and-Language (V\&L) tasks, scholars have introduced an abundance of deep learning models in this research domain. Furthermore, in recent years, transfer learning has shown tremendous success in Computer Vision for tasks such as Image Classification and Object Detection, and in Natural Language Processing for tasks such as Question Answering and Machine Translation. Inheriting the spirit of Transfer Learning, research works in V\&L have devised multiple pretraining techniques on large-scale datasets in order to enhance the performance of downstream tasks. The aim of this article is to provide a comprehensive review of contemporary V\&L pretraining models. In particular, we categorize and delineate pretraining approaches, along with a summary of state-of-the-art vision-and-language pretrained models. Moreover, a list of training datasets and downstream tasks is supplied to further sharpen the perspective on V\&L pretraining. Lastly, we take a step further and discuss numerous directions for future research.