Paper Title
Context-Aware Learning to Rank with Self-Attention
Paper Authors
Paper Abstract
Learning to rank is a key component of many e-commerce search engines. In learning to rank, one is interested in optimising the global ordering of a list of items according to their utility for users. Popular approaches learn a scoring function that scores items individually (i.e. without the context of other items in the list) by optimising a pointwise, pairwise or listwise loss. The list is then sorted in the descending order of the scores. Possible interactions between items present in the same list are taken into account in the training phase at the loss level. However, during inference, items are scored individually, and possible interactions between them are not considered. In this paper, we propose a context-aware neural network model that learns item scores by applying a self-attention mechanism. The relevance of a given item is thus determined in the context of all other items present in the list, both in training and in inference. We empirically demonstrate significant performance gains of self-attention based neural architecture over Multi-Layer Perceptron baselines, in particular on a dataset coming from search logs of a large-scale e-commerce marketplace, Allegro.pl. This effect is consistent across popular pointwise, pairwise and listwise losses. Finally, we report new state-of-the-art results on MSLR-WEB30K, the learning to rank benchmark.
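The core idea in the abstract can be illustrated with a minimal sketch: instead of scoring each item in isolation, a self-attention encoder lets every item attend to all other items in the same list, so each score is context-dependent at both training and inference time. This is not the authors' implementation; the module name, dimensions and hyperparameters below are illustrative assumptions.

# Minimal sketch of a context-aware scorer, assuming PyTorch.
# All names (ContextAwareScorer, score_head, d_model, etc.) are
# hypothetical and chosen for illustration only.
import torch
import torch.nn as nn

class ContextAwareScorer(nn.Module):
    def __init__(self, num_features: int, d_model: int = 64,
                 nhead: int = 4, num_layers: int = 2):
        super().__init__()
        # Project raw item features into the model dimension.
        self.input_proj = nn.Linear(num_features, d_model)
        # Self-attention encoder: every item attends to every other
        # item in the same list, both in training and in inference.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # One relevance score per item.
        self.score_head = nn.Linear(d_model, 1)

    def forward(self, items: torch.Tensor) -> torch.Tensor:
        # items: (batch, list_length, num_features)
        h = self.input_proj(items)
        h = self.encoder(h)                     # context-aware item representations
        return self.score_head(h).squeeze(-1)   # scores: (batch, list_length)

# Usage: score a batch of lists, then sort each list by descending score.
scorer = ContextAwareScorer(num_features=10)
lists = torch.randn(2, 5, 10)                  # 2 lists of 5 items each
scores = scorer(lists)                         # any pointwise/pairwise/listwise loss fits here
ranking = scores.argsort(dim=1, descending=True)

Because the scores come out of a single forward pass over the whole list, the same architecture can be trained with any of the pointwise, pairwise or listwise losses mentioned in the abstract; only the loss applied to the score tensor changes.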