Paper Title
Team Yao at Factify 2022: Utilizing Pre-trained Models and Co-attention Networks for Multi-Modal Fact Verification
Paper Authors
Paper Abstract
In recent years, social media has enabled users to be exposed to a myriad of misinformation and disinformation; thus, misinformation has attracted a great deal of attention both as a research topic and as a social issue. To address this problem, we propose a framework, Pre-CoFact, composed of two pre-trained models for extracting features from text and images, and multiple co-attention networks for fusing features from the same modality but different sources, as well as across different modalities. In addition, we adopt an ensemble method that combines different pre-trained models in Pre-CoFact to achieve better performance. We further illustrate the effectiveness of our design through an ablation study and examine different pre-trained models for comparison. Our team, Yao, won the fifth prize (F1-score: 74.585%) in the Factify challenge hosted by De-Factify @ AAAI 2022, which demonstrates that our model achieved competitive performance without using auxiliary tasks or extra information. The source code of our work is publicly available at https://github.com/wywyWang/Multi-Modal-Fact-Verification-2021.
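For illustration, below is a minimal sketch of what a co-attention fusion layer between text and image features could look like. This is not the authors' exact implementation: the class name CoAttention, the feature dimensions, and the use of torch.nn.MultiheadAttention are assumptions made only to convey the idea of each modality attending to the other.

```python
import torch
import torch.nn as nn


class CoAttention(nn.Module):
    """Cross-modal co-attention: each modality attends to the other (illustrative sketch)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # Text tokens query image patches, and image patches query text tokens.
        self.text_to_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        # Query = one modality, key/value = the other modality.
        text_attended, _ = self.text_to_image(text_feats, image_feats, image_feats)
        image_attended, _ = self.image_to_text(image_feats, text_feats, text_feats)
        return text_attended, image_attended


# Toy usage with random tensors standing in for outputs of pre-trained encoders
# (hypothetical shapes: 64 text tokens, 49 image patches, 768-dim features).
text_feats = torch.randn(2, 64, 768)
image_feats = torch.randn(2, 49, 768)
fused_text, fused_image = CoAttention(dim=768)(text_feats, image_feats)
```

In the actual framework, the fused representations would then be aggregated and passed to a classifier over the fact-verification categories; see the linked repository for the authors' implementation.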