FaiR-N: Fair and Robust Neural Networks for Structured Data
Abstract
Fairness in machine learning is crucial when individuals are subject to automated decisions made by models in high-stakes domains. Organizations that deploy these models may also need to satisfy regulations that promote responsible and ethical A.I. While fairness metrics that compare model error rates across subpopulations have been widely investigated for detecting and mitigating bias, fairness in terms of an equalized ability to achieve recourse across protected attribute groups has remained relatively unexplored. We present a novel formulation for training neural networks that considers the distance of data points to the decision boundary, such that the new objective: (1) reduces the disparity between two groups in the average distance to the decision boundary for individuals subject to a negative outcome in each group, i.e., the network is more fair with respect to the ability to obtain recourse, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that training with this loss yields neural networks that are more fair and more robust, with accuracy similar to models trained without it. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across groups also improves fairness measures that rely on error rates. To the best of our knowledge, this is the first time that recourse capabilities across groups have been considered in training fairer neural networks, and the first investigation of the relation between error-rate-based fairness and recourse-based fairness.
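The two distance-based terms in the objective can be sketched for a linear (logistic) classifier, where the distance of a point to the decision boundary is exactly |w·x + b| / ‖w‖. This is only an illustrative reading of the abstract, not the paper's actual formulation: the function names, the group/negative-outcome masks, and the weights `lam_fair` and `lam_robust` are assumptions, and a deep network would need an approximation of the boundary distance (e.g., a gradient-based margin estimate) rather than this closed form.

```python
import numpy as np

def boundary_distances(X, w, b):
    # Signed distance of each row of X to the hyperplane w.x + b = 0.
    # Exact for a linear model; deep networks would require an
    # approximation of the distance to the decision boundary.
    return (X @ w + b) / np.linalg.norm(w)

def fair_robust_penalty(X, neg_mask, group, w, b,
                        lam_fair=1.0, lam_robust=0.1):
    """Illustrative penalty combining the abstract's two terms:
    (1) shrink the gap between the two groups' mean boundary
        distances among negatively-classified individuals
        (equalized ability to obtain recourse), and
    (2) reward a larger average margin overall (robustness)."""
    d = np.abs(boundary_distances(X, w, b))
    d_g0 = d[neg_mask & (group == 0)].mean()
    d_g1 = d[neg_mask & (group == 1)].mean()
    recourse_gap = np.abs(d_g0 - d_g1)   # term (1): recourse disparity
    avg_margin = d.mean()                # term (2): average margin
    return lam_fair * recourse_gap - lam_robust * avg_margin
```

In a hypothetical training loop this penalty would be added to the usual classification loss, so minimizing the total pushes the two groups' average distances to recourse together while pushing all points away from the boundary.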