Paper Title

T-VSE: Transformer-Based Visual Semantic Embedding

Authors

Muhammet Bastan, Arnau Ramisa, Mehmet Tek

Abstract

Transformer models have recently achieved impressive performance on NLP tasks, owing to new algorithms for self-supervised pre-training on very large text corpora. In contrast, recent literature suggests that simple average word models outperform more complicated language models (e.g., RNNs and Transformers) on cross-modal image/text search tasks on standard benchmarks like MS COCO. In this paper, we show that dataset scale and training strategy are critical, and demonstrate that Transformer-based cross-modal embeddings outperform word-average and RNN-based embeddings by a large margin when trained on a large dataset of e-commerce product image-title pairs.
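The cross-modal embeddings described in the abstract are typically trained with a bidirectional hinge-based triplet loss over matched image/text pairs (as in VSE++-style models). The following is a minimal numpy sketch of such a loss, not the authors' actual implementation; the function name, the max-violation negative mining, and the margin value are illustrative assumptions.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale each row to unit L2 norm so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def vse_hinge_loss(img_emb, txt_emb, margin=0.2):
    """Illustrative bidirectional max-violation hinge loss over a batch of
    matched image/text embedding pairs (VSE++-style; a sketch, not T-VSE's code)."""
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    sim = img @ txt.T                       # pairwise cosine similarity matrix
    pos = np.diag(sim)                      # similarities of the matched pairs
    # hinge cost against every negative, in both retrieval directions
    cost_txt = np.clip(margin + sim - pos[:, None], 0, None)  # image -> text
    cost_img = np.clip(margin + sim - pos[None, :], 0, None)  # text -> image
    np.fill_diagonal(cost_txt, 0)           # ignore the positives themselves
    np.fill_diagonal(cost_img, 0)
    # keep only the hardest negative per query ("max violation")
    return cost_txt.max(axis=1).sum() + cost_img.max(axis=0).sum()

# toy batch: 3 matched pairs in a 4-d embedding space
rng = np.random.default_rng(0)
img = rng.normal(size=(3, 4))
txt = img + 0.05 * rng.normal(size=(3, 4))  # text embeddings near their images
print(vse_hinge_loss(img, txt))
```

In a full model, `img_emb` would come from an image encoder and `txt_emb` from a text encoder (a word-average, RNN, or Transformer branch in the paper's comparison); only the text branch differs between the compared variants.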
