Paper Title

Modeling and Correcting Bias in Sequential Evaluation

Authors

Jingyan Wang and Ashwin Pananjady

Abstract

We consider the problem of sequential evaluation, in which an evaluator observes candidates in a sequence and assigns scores to these candidates in an online, irrevocable fashion. Motivated by the psychology literature that has studied sequential bias in such settings -- namely, dependencies between the evaluation outcome and the order in which the candidates appear -- we propose a natural model for the evaluator's rating process that captures the lack of calibration inherent to such a task. We conduct crowdsourcing experiments to demonstrate various facets of our model. We then proceed to study how to correct sequential bias under our model by posing this as a statistical inference problem. We propose a near-linear time, online algorithm for this task and prove guarantees in terms of two canonical ranking metrics. We also prove that our algorithm is information theoretically optimal, by establishing matching lower bounds in both metrics. Finally, we perform a host of numerical experiments to show that our algorithm often outperforms the de facto method of using the rankings induced by the reported scores, both in simulation and on the crowdsourcing data that we collected.
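To make the setting concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of the problem the abstract describes: candidates with latent qualities arrive in sequence, the evaluator's reported scores drift with position, and we measure how far the ranking induced by the reported scores falls from the true ranking using the Kendall tau distance, one canonical ranking metric. The drift model and all numbers below are illustrative assumptions.

```python
import random

def kendall_tau_distance(perm_a, perm_b):
    """Count discordant pairs between two rankings of the same items."""
    pos_b = {item: i for i, item in enumerate(perm_b)}
    n = len(perm_a)
    distance = 0
    for i in range(n):
        for j in range(i + 1, n):
            # A pair is discordant if its relative order differs
            # between the two rankings.
            if pos_b[perm_a[i]] > pos_b[perm_a[j]]:
                distance += 1
    return distance

random.seed(0)
n = 10
# Latent candidate qualities (unknown to the evaluator).
qualities = [random.gauss(0, 1) for _ in range(n)]

# A toy miscalibration: the evaluator's internal scale inflates as the
# sequence progresses, so later candidates get systematically higher
# reported scores. This is an illustrative bias, not the paper's model.
reported = [q + 0.3 * t for t, q in enumerate(qualities)]

true_rank = sorted(range(n), key=lambda i: -qualities[i])
reported_rank = sorted(range(n), key=lambda i: -reported[i])

# Distance 0 would mean the reported scores recover the true order.
print(kendall_tau_distance(true_rank, reported_rank))
```

A debiasing procedure of the kind the paper studies would take the reported scores (and the arrival order) as input and output a ranking whose Kendall tau distance to the true ranking is provably smaller than that of the naive reported-score ranking.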
