Paper Title

Fairness in Federated Learning via Core-Stability

Paper Authors

Bhaskar Ray Chaudhury, Linyi Li, Mintong Kang, Bo Li, Ruta Mehta

Paper Abstract

Federated learning provides an effective paradigm for jointly optimizing a model that benefits from rich distributed data while protecting data privacy. Nonetheless, the heterogeneous nature of distributed data makes it challenging to define and ensure fairness among local agents. For instance, it is intuitively "unfair" for agents with high-quality data to sacrifice their performance because of other agents with low-quality data. Currently popular egalitarian and weighted equity-based fairness measures suffer from this pitfall. In this work, we aim to formally represent this problem and address these fairness issues using concepts from cooperative game theory and social choice theory. We model the task of learning a shared predictor in the federated setting as a fair public decision-making problem, and then define the notion of core-stable fairness: given $N$ agents, there is no subset of agents $S$ that can benefit significantly by forming a coalition among themselves, based on their utilities $U_N$ and $U_S$ (i.e., $\frac{|S|}{N} U_S \geq U_N$). Core-stable predictors are robust to low-quality local data from some agents, and they additionally satisfy Proportionality and Pareto-optimality, two well sought-after fairness and efficiency notions in social choice. We then propose an efficient federated learning protocol, CoreFed, to optimize a core-stable predictor. CoreFed determines a core-stable predictor when the agents' loss functions are convex, and it determines approximately core-stable predictors when the loss functions are non-convex, as with smooth neural networks. We further show the existence of core-stable predictors in more general settings using Kakutani's fixed-point theorem. Finally, we empirically validate our analysis on two real-world datasets, and we show that CoreFed achieves higher core-stability fairness than FedAvg while attaining similar accuracy.
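The abstract compresses core-stability into a single inequality; unpacked per agent (my reading, with $u_i$ denoting agent $i$'s utility), a predictor $\theta^*$ is core-stable when

$$\nexists\ S \subseteq [N],\ \theta' \ \text{such that}\ \ \frac{|S|}{N}\, u_i(\theta') \geq u_i(\theta^*)\ \ \forall i \in S, \ \text{with strict inequality for some } i \in S.$$

Taking $S = [N]$ recovers Pareto-optimality, and taking singleton coalitions $S = \{i\}$ yields $u_i(\theta^*) \geq \frac{1}{N} \max_{\theta'} u_i(\theta')$, i.e., Proportionality, which is why both properties follow from core-stability.

The abstract does not spell out CoreFed's update rule, so the sketch below is only a plausible reading consistent with the convex-case guarantee: it runs federated gradient ascent on $\sum_i \log u_i(\theta)$, i.e., it maximizes the product of agent utilities (Nash social welfare), an objective classically linked to core outcomes. The agent interface, method names, and hyperparameters are hypothetical, not from the paper.

```python
import numpy as np

def corefed_style_round(theta, agents, lr=0.1, eps=1e-8):
    """One illustrative aggregation round: ascend sum_i log u_i(theta).

    Scaling each agent's utility gradient by 1/u_i turns it into the
    gradient of log u_i, so the averaged step climbs the Nash social
    welfare prod_i u_i(theta). `utility`/`utility_grad` are assumed,
    hypothetical methods on each agent object.
    """
    step = np.zeros_like(theta)
    for agent in agents:
        u = agent.utility(theta)        # scalar utility, e.g. 1 - local loss
        g = agent.utility_grad(theta)   # gradient of that utility w.r.t. theta
        step += g / (u + eps)           # gradient of log u_i (eps guards against u = 0)
    return theta + lr * step / len(agents)
```

Compared with FedAvg's plain gradient averaging, agents whose current utility is low receive proportionally larger weight in this step, which matches the intuition that no coalition should be able to gain by deviating in the convex case.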
