Paper Title

Privacy Threats Against Federated Matrix Factorization

Authors

Dashan Gao, Ben Tan, Ce Ju, Vincent W. Zheng, Qiang Yang

Abstract

Matrix factorization has been very successful in practical recommendation applications and e-commerce. Due to data shortage and stringent regulations, it can be hard for a single company to collect sufficient data to build a performant recommender system. Federated learning provides the possibility of bridging data silos and building machine learning models without compromising privacy and security. Participants sharing common users or items collaboratively build a model over the data of all participants. Some works have explored the application of federated learning to recommender systems and the privacy issues in collaborative filtering systems. However, the privacy threats in federated matrix factorization have not been studied. In this paper, we categorize federated matrix factorization into three types based on the partition of the feature space and analyze the privacy threats against each type of federated matrix factorization model. We also discuss privacy-preserving approaches. To the best of our knowledge, this is the first study of privacy threats against matrix factorization methods in the federated learning framework.
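As background for the abstract, the matrix factorization it refers to learns low-rank user and item factors whose product approximates the observed rating matrix. A minimal single-party sketch (not the paper's federated protocol; the rank, learning rate, and regularization below are illustrative assumptions) might look like:

```python
import numpy as np

def factorize(R, k=2, lr=0.01, reg=0.01, epochs=1000, seed=0):
    """Factor a rating matrix R (np.nan marks unobserved entries)
    into user factors U and item factors V so that R ~ U @ V.T,
    using stochastic gradient descent on the observed entries."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    rows, cols = np.nonzero(~np.isnan(R))
    for _ in range(epochs):
        for i, j in zip(rows, cols):
            err = R[i, j] - U[i] @ V[j]        # prediction error on one rating
            U[i] += lr * (err * V[j] - reg * U[i])
            V[j] += lr * (err * U[i] - reg * V[j])
    return U, V

# Toy 3x3 rating matrix with missing entries (illustrative data).
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 1.0, 5.0]])
U, V = factorize(R)
pred = U @ V.T  # predictions for all entries, including the missing ones
```

In the federated settings the paper categorizes, the gradient updates on `U` and `V` would instead be exchanged across participants, which is precisely where the analyzed privacy threats arise.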
