Paper Title
Machine Learning Fairness in Justice Systems: Base Rates, False Positives, and False Negatives
Authors
Abstract
Machine learning best practice statements have proliferated, but there is a lack of consensus on what the standards should be. For fairness standards in particular, there is little guidance on how fairness might be achieved in practice. Specifically, fairness in errors (both false negatives and false positives) can pose a problem of how to set weights, how to make unavoidable tradeoffs, and how to judge models that present different kinds of errors across racial groups. This paper considers the consequences of having higher rates of false positives for one racial group and higher rates of false negatives for another racial group. The paper examines how different errors in justice settings can present problems for machine learning applications, the limits of computation for resolving tradeoffs, and how solutions might have to be crafted through courageous conversations with leadership, line workers, stakeholders, and impacted communities.
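The tradeoff the abstract describes can be made concrete with a small sketch. The code below is an illustration, not from the paper: the data and group names are invented, and the error-rate definitions are the standard ones (false positive rate = FP / (FP + TN), false negative rate = FN / (FN + TP)). It shows how a single classifier can produce a higher false positive rate for one group and a higher false negative rate for another, which is exactly the pattern that forces the weighting and tradeoff decisions discussed above.

```python
# Hypothetical illustration: group-wise error rates for a binary risk
# classifier. Labels and predictions are invented toy data, not results
# from the paper.

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fpr = fp / (fp + tn) if fp + tn else 0.0
    fnr = fn / (fn + tp) if fn + tp else 0.0
    return fpr, fnr

# Toy data: 1 = labeled high risk. The model over-predicts risk for
# group A (more false positives) and under-predicts for group B
# (more false negatives).
group_a_true = [1, 1, 1, 0, 0, 0, 0, 0]
group_a_pred = [1, 1, 1, 1, 1, 0, 0, 0]
group_b_true = [1, 1, 1, 1, 1, 0, 0, 0]
group_b_pred = [1, 1, 0, 0, 0, 0, 0, 0]

fpr_a, fnr_a = error_rates(group_a_true, group_a_pred)
fpr_b, fnr_b = error_rates(group_b_true, group_b_pred)
print(f"Group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")  # higher FPR
print(f"Group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")  # higher FNR
```

Neither group is treated worse on every metric, so no single number resolves which model behavior is fairer; that judgment is what the paper argues must come from deliberation with stakeholders rather than from computation alone.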