Paper Title
Directional Privacy for Deep Learning
Paper Authors
Paper Abstract
Differentially Private Stochastic Gradient Descent (DP-SGD) is a key method for applying privacy in the training of deep learning models. It applies isotropic Gaussian noise to gradients during training, which can perturb these gradients in any direction, damaging utility. Metric DP, however, can provide alternative mechanisms based on arbitrary metrics that might be more suitable for preserving utility. In this paper, we apply \textit{directional privacy}, via a mechanism based on the von Mises-Fisher (VMF) distribution, to perturb gradients in terms of \textit{angular distance} so that gradient direction is broadly preserved. We show that this provides both $\epsilon$-DP and $\epsilon d$-privacy for deep learning training, rather than the $(\epsilon, \delta)$-privacy of the Gaussian mechanism. Experiments on key datasets then indicate that the VMF mechanism can outperform the Gaussian in the utility-privacy trade-off. In particular, our experiments provide a direct empirical comparison of privacy between the two approaches in terms of their ability to defend against reconstruction and membership inference.
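For reference, metric DP (also written $d$-privacy) generalises differential privacy to an arbitrary distance $d$ over inputs; the notation below follows the standard definition from the metric-DP literature rather than anything quoted from the paper. A mechanism $M$ satisfies $\epsilon d$-privacy if

\[
\Pr[M(x) \in S] \;\le\; e^{\epsilon\, d(x, x')}\, \Pr[M(x') \in S] \quad \text{for all inputs } x, x' \text{ and all measurable } S.
\]

Standard $\epsilon$-DP is recovered when $d$ is the adjacency (Hamming) metric; in the directional setting described here, $d$ is the angular distance between gradients.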
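Below is a minimal sketch of the idea, not the paper's implementation: perturb only the \textit{direction} of a gradient by sampling from a VMF distribution centred on it (restoring the original norm afterwards), versus adding isotropic Gaussian noise as in DP-SGD. The function names (`vmf_perturb`, `gaussian_perturb`) and the values of `kappa`/`sigma` are illustrative assumptions; calibrating $\kappa$ to a target $\epsilon$ is the subject of the paper and is not shown. Requires SciPy >= 1.11 for `scipy.stats.vonmises_fisher`.

```python
import numpy as np
from scipy.stats import vonmises_fisher  # SciPy >= 1.11

def vmf_perturb(grad, kappa, rng=None):
    """Perturb only the *direction* of a gradient with vMF noise (sketch).

    The gradient is normalised onto the unit sphere, a noisy direction is
    sampled from vMF(mu=grad/||grad||, kappa), and the original norm is
    restored. Larger kappa => samples concentrate around the true direction.
    """
    norm = np.linalg.norm(grad)
    if norm == 0.0:
        return grad
    mu = grad / norm
    sample = vonmises_fisher(mu, kappa).rvs(random_state=rng)
    noisy_dir = np.asarray(sample).reshape(grad.shape)  # handle (1, d) or (d,)
    return norm * noisy_dir

def gaussian_perturb(grad, sigma, rng=None):
    """Baseline DP-SGD-style isotropic Gaussian noise on the gradient."""
    rng = np.random.default_rng(rng)
    return grad + rng.normal(scale=sigma, size=grad.shape)

rng = np.random.default_rng(0)
g = rng.normal(size=16)
g_vmf = vmf_perturb(g, kappa=50.0, rng=rng)
g_gauss = gaussian_perturb(g, sigma=0.5, rng=rng)

cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cos(g, vMF)      = {cos(g, g_vmf):.3f}")    # close to 1: direction kept
print(f"cos(g, Gaussian) = {cos(g, g_gauss):.3f}")  # typically lower
```

With large $\kappa$ the sampled direction stays close to the true gradient while its norm is unchanged, which is the sense in which the mechanism broadly preserves gradient direction; isotropic Gaussian noise, by contrast, perturbs both the direction and the magnitude of the gradient.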