Paper Title
PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning
Paper Authors
Paper Abstract
Group fairness ensures that the outcomes of machine learning (ML) based decision-making systems are not biased towards a certain group of people defined by a sensitive attribute such as gender or ethnicity. Achieving group fairness in Federated Learning (FL) is challenging because mitigating bias inherently requires using the sensitive attribute values of all clients, while FL is aimed precisely at protecting privacy by not giving access to the clients' data. As we show in this paper, this conflict between fairness and privacy in FL can be resolved by combining FL with Secure Multiparty Computation (MPC) and Differential Privacy (DP). In doing so, we propose a method for training group-fair ML models in cross-device FL under complete and formal privacy guarantees, without requiring the clients to disclose their sensitive attribute values.
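The abstract does not spell out the protocol, but the two building blocks it names can be illustrated with a minimal Python sketch: additive secret sharing (a simple MPC primitive) lets computation parties aggregate clients' sensitive-attribute indicators without any single party seeing an individual value, and Laplace noise provides a DP release of the aggregate. Everything below (the field modulus, the toy client values, the privacy budget `epsilon`) is an illustrative assumption, not the paper's actual method.

```python
import secrets
import numpy as np

PRIME = 2**61 - 1  # field modulus for additive secret sharing (assumed)

def share(value: int, n_parties: int) -> list[int]:
    """Split an integer into n additive shares modulo PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Toy sensitive-attribute indicators, one per client (hypothetical data).
clients = [1, 0, 1, 1, 0]
n_parties = 3

# Each client secret-shares its indicator across the computation parties,
# so no single party learns any individual attribute value.
party_totals = [0] * n_parties
for a in clients:
    for i, s in enumerate(share(a, n_parties)):
        party_totals[i] = (party_totals[i] + s) % PRIME

# Parties combine their aggregate shares; only the sum is ever revealed.
group_count = sum(party_totals) % PRIME

# DP release: Laplace noise with sensitivity 1 (each client changes the
# count by at most 1) and a hypothetical privacy budget epsilon.
epsilon = 1.0
noisy_count = group_count + np.random.laplace(scale=1.0 / epsilon)
print(f"DP estimate of group size: {noisy_count:.1f}")
```

In a fairness-aware training loop, a noisy group statistic like this could feed a bias-mitigation step (e.g., reweighting updates per group) without the server ever observing who belongs to which group; the paper's concrete mechanism may differ.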