Title

Zero-shot Image Captioning by Anchor-augmented Vision-Language Space Alignment

Authors

Junyang Wang, Yi Zhang, Ming Yan, Ji Zhang, Jitao Sang

Abstract

CLIP (Contrastive Language-Image Pre-Training) has shown remarkable zero-shot transfer capabilities in cross-modal correlation tasks such as visual classification and image retrieval. However, its performance in cross-modal generation tasks such as zero-shot image captioning remains unsatisfactory. In this work, we discuss how directly employing CLIP for zero-shot image captioning relies mostly on the textual modality in context and largely ignores the visual information, a phenomenon we call the "contextual language prior". To address this, we propose Cross-modal Language Models (CLMs) to facilitate unsupervised cross-modal learning. We further propose Anchor Augment to guide the generative model's attention to the fine-grained information in CLIP's representations. Experiments on MS COCO and Flickr 30K validate the promising performance of the proposed approach in both captioning quality and computational efficiency.
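For readers unfamiliar with the zero-shot transfer the abstract refers to, below is a minimal sketch (not the paper's method) of CLIP's cross-modal correlation ability: scoring an image against candidate texts by similarity in the joint embedding space, here via the Hugging Face `transformers` API. The checkpoint name, image path, and candidate captions are illustrative assumptions.

```python
# Minimal sketch of CLIP zero-shot image-text matching, assuming the
# Hugging Face transformers library and the openai/clip-vit-base-patch32
# checkpoint. This illustrates the correlation task the abstract mentions,
# not the paper's captioning approach.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # hypothetical input image
candidates = ["a photo of a dog", "a photo of a cat", "a photo of a car"]

inputs = processor(text=candidates, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds one image-text similarity score per candidate;
# softmax turns them into a distribution over the candidate texts.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(candidates, probs[0].tolist())))
```

Note that this only ranks pre-written texts; generating a caption token by token is the harder setting the paper targets, where CLIP alone tends to fall back on the contextual language prior.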
