Paper Title

On the Privacy-Utility Trade-off With and Without Direct Access to the Private Data

Authors

Zamani, Amirreza, Oechtering, Tobias J., Skoglund, Mikael

Abstract

We study an information-theoretic privacy mechanism design problem for two scenarios in which the private data is either observable or hidden. In each scenario, we first consider bounded mutual information as the privacy leakage criterion, and then use two different per-letter privacy constraints. In the first scenario, an agent observes useful data $Y$ that is correlated with private data $X$ and wishes to disclose the useful information to a user. A privacy mechanism is designed to generate disclosed data $U$ that maximizes the revealed information about $Y$ while satisfying a bounded privacy leakage constraint. In the second scenario, the agent additionally has access to the private data. To this end, we first extend the Functional Representation Lemma and the Strong Functional Representation Lemma by relaxing the independence condition, thereby allowing a certain leakage, in order to find lower bounds for the second scenario under different privacy leakage constraints. Furthermore, upper and lower bounds are derived in the first scenario considering different privacy constraints. In particular, for the case where no leakage is allowed, our upper and lower bounds improve on previous bounds. Moreover, considering bounded mutual information as the privacy constraint, we show that if the common information and the mutual information between $X$ and $Y$ are equal, then the upper bound attained in the second scenario is tight. Finally, the privacy-utility trade-off with prioritized private data is studied, where one part of $X$, i.e., $X_1$, is more private than the remaining part, i.e., $X_2$, and we provide lower and upper bounds.
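A minimal sketch of the two design problems the abstract describes, using bounded mutual information as the leakage criterion. The notation ($\epsilon$ for the permitted leakage, $P_{U\mid Y}$ and $P_{U\mid Y,X}$ for the mechanism) is assumed here for illustration and is not taken from the abstract itself:

```latex
% Scenario 1 (hidden private data): the mechanism sees only Y,
% so the Markov chain X - Y - U holds.
\sup_{P_{U\mid Y}} \; I(U;Y) \quad \text{s.t.} \quad I(U;X) \le \epsilon

% Scenario 2 (observable private data): the mechanism may also use X.
\sup_{P_{U\mid Y,X}} \; I(U;Y) \quad \text{s.t.} \quad I(U;X) \le \epsilon
```

The second supremum is over a strictly larger set of mechanisms, which is why direct access to $X$ can only improve utility; the extended (Strong) Functional Representation Lemmas mentioned in the abstract yield constructive lower bounds for that case, and $\epsilon = 0$ recovers the perfect-privacy setting.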
