Paper Title
Decentralized Matrix Factorization with Heterogeneous Differential Privacy
Paper Authors
Paper Abstract
Conventional matrix factorization relies on centralized collection of users' data for recommendation, which might introduce an increased risk of privacy leakage, especially when the recommender is untrusted. Existing differentially private matrix factorization methods either assume the recommender is trusted, or can only provide a uniform level of privacy protection for all users and items with an untrusted recommender. In this paper, we propose a novel Heterogeneous Differentially Private Matrix Factorization algorithm (denoted as HDPMF) for untrusted recommenders. To the best of our knowledge, we are the first to achieve heterogeneous differential privacy for decentralized matrix factorization in the untrusted recommender scenario. Specifically, our framework uses a modified stretching mechanism with an innovative rescaling scheme to achieve a better trade-off between privacy and accuracy. Meanwhile, by allocating the privacy budget properly, we can capture homogeneous privacy preference within a user/item but heterogeneous privacy preference across different users/items. Theoretical analysis confirms that HDPMF renders a rigorous privacy guarantee, and exhaustive experiments demonstrate its superiority, especially in strong privacy guarantee, high-dimensional model, and sparse dataset scenarios.
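To make the decentralized, per-user privacy idea concrete, the sketch below shows one local matrix-factorization step in which a user perturbs their own gradient before anything leaves the device, with Laplace noise calibrated to that user's personal budget ε_u. This is a minimal illustration under our own assumptions (gradient clipping for bounded L1 sensitivity, plain Laplace noise, and the function name `private_local_update` are all hypothetical); it is not the paper's HDPMF algorithm, which additionally uses a modified stretching mechanism with rescaling.

```python
import numpy as np

def private_local_update(user_vec, item_vecs, ratings, eps_u,
                         lr=0.01, clip=1.0, seed=0):
    """One decentralized MF step for a single user (illustrative sketch).

    The user computes the squared-error gradient on their own ratings,
    clips it so its L1 norm (the L1 sensitivity) is at most `clip`,
    then adds Laplace noise with scale clip / eps_u before the update.
    A smaller eps_u (stronger privacy preference) means more noise,
    which is the heterogeneous-budget intuition from the abstract.
    """
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(user_vec)
    for j, r in ratings.items():
        err = user_vec @ item_vecs[j] - r
        grad += err * item_vecs[j]
    # Clip so the L1 sensitivity of the released gradient is bounded.
    l1 = np.abs(grad).sum()
    if l1 > clip:
        grad *= clip / l1
    # Laplace mechanism: noise scale = sensitivity / eps_u.
    noisy_grad = grad + rng.laplace(scale=clip / eps_u, size=grad.shape)
    return user_vec - lr * noisy_grad
```

For example, two users holding identical data but different budgets (say ε = 0.1 vs ε = 10) would release updates with very different noise levels, which is how heterogeneous privacy preferences across users are captured without a trusted recommender.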