Title

Groupwise Query Performance Prediction with BERT

Authors

Xiaoyang Chen, Ben He, Le Sun

Abstract

While large-scale pre-trained language models like BERT have advanced the state-of-the-art in IR, their application in query performance prediction (QPP) is so far based on pointwise modeling of individual queries. Meanwhile, recent studies suggest that the cross-attention modeling of a group of documents can effectively boost performance for both learning-to-rank algorithms and BERT-based re-ranking. To this end, a BERT-based groupwise QPP model is proposed, in which the ranking contexts of a list of queries are jointly modeled to predict the relative performance of individual queries. Extensive experiments on three standard TREC collections showcase the effectiveness of our approach. Our code is available at https://github.com/VerdureChen/Group-QPP.
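The core groupwise idea, scoring each query in the context of the other queries in its group via cross-attention rather than pointwise, can be sketched as below. This is a minimal illustration with randomly initialized weights, not the paper's implementation: the input vectors stand in for per-query BERT representations, and all weight matrices and shapes are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def groupwise_scores(query_reprs, w_q, w_k, w_v, w_out):
    """Score each query relative to its group with one self-attention pass.

    query_reprs: (n_queries, d) array -- stand-ins for per-query BERT
    vectors (hypothetical; the paper's actual encoder inputs differ).
    Returns an (n_queries,) vector of relative-performance scores.
    """
    q = query_reprs @ w_q
    k = query_reprs @ w_k
    v = query_reprs @ w_v
    # (n, n) attention: every query attends to the whole group,
    # so each score is conditioned on the group's ranking context
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)
    mixed = attn @ v                     # group-contextualized representations
    return (mixed @ w_out).squeeze(-1)   # one scalar score per query

rng = np.random.default_rng(0)
d = 8
reprs = rng.normal(size=(4, d))          # a group of 4 queries
scores = groupwise_scores(
    reprs,
    rng.normal(size=(d, d)), rng.normal(size=(d, d)),
    rng.normal(size=(d, d)), rng.normal(size=(d, 1)),
)
print(scores.shape)  # one score per query in the group
```

In a pointwise predictor each query would be scored in isolation; here the attention matrix mixes information across the group before scoring, which is what lets the model predict *relative* performance.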
