Title

Joint Optimization of AI Fairness and Utility: A Human-Centered Approach

Authors

Yunfeng Zhang, Rachel K. E. Bellamy, Kush R. Varshney

Abstract

Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. The AI research community has proposed many methods to measure and mitigate unwanted biases, but few of them involve inputs from human policy makers. We argue that because different fairness criteria sometimes cannot be simultaneously satisfied, and because achieving fairness often requires sacrificing other objectives such as model accuracy, it is key to acquire and adhere to human policy makers' preferences on how to make the tradeoff among these objectives. In this paper, we propose a framework and some exemplar methods for eliciting such preferences and for optimizing an AI model according to these preferences.
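The abstract's core idea, trading off model utility against a fairness measure under an elicited preference, can be made concrete with a small sketch. The code below is an illustrative assumption, not the paper's actual framework: it uses accuracy as the utility term, demographic parity difference as the fairness term, and a hypothetical `fairness_weight` parameter standing in for the policy maker's elicited preference, then grid-searches a decision threshold that maximizes the scalarized objective.

```python
import numpy as np

# Hypothetical illustration (not the paper's method): scalarize utility
# (accuracy) and unfairness (demographic parity difference) with an
# elicited preference weight, then tune a decision threshold against it.

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between the two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

def joint_objective(y_true, y_pred, group, fairness_weight):
    """Weighted tradeoff (higher is better); fairness_weight in [0, 1]
    stands in for the policy maker's elicited preference."""
    accuracy = (y_pred == y_true).mean()
    unfairness = demographic_parity_difference(y_pred, group)
    return (1 - fairness_weight) * accuracy - fairness_weight * unfairness

# Toy data: model scores, a binary protected attribute, and labels that
# are mildly correlated with group membership.
rng = np.random.default_rng(0)
scores = rng.uniform(size=1000)
group = rng.integers(0, 2, size=1000)
y_true = (scores + 0.1 * group + rng.normal(0, 0.2, 1000) > 0.6).astype(int)

# Pick the threshold that maximizes the joint objective at weight 0.5.
best = max(
    (joint_objective(y_true, (scores > t).astype(int), group, 0.5), t)
    for t in np.linspace(0.1, 0.9, 81)
)
print(f"best joint objective {best[0]:.3f} at threshold {best[1]:.2f}")
```

Sweeping `fairness_weight` from 0 to 1 traces out the accuracy-fairness frontier; eliciting where on that frontier the policy maker wants to sit is the kind of preference capture the abstract argues for.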
