Paper Title

Minimax Optimality (Probably) Doesn't Imply Distribution Learning for GANs

Paper Authors

Sitan Chen, Jerry Li, Yuanzhi Li, Raghu Meka

Paper Abstract

Arguably the most fundamental question in the theory of generative adversarial networks (GANs) is to understand to what extent GANs can actually learn the underlying distribution. Theoretical and empirical evidence suggests local optimality of the empirical training objective is insufficient. Yet, it does not rule out the possibility that achieving a true population minimax optimal solution might imply distribution learning. In this paper, we show that standard cryptographic assumptions imply that this stronger condition is still insufficient. Namely, we show that if local pseudorandom generators (PRGs) exist, then for a large family of natural continuous target distributions, there are ReLU network generators of constant depth and polynomial size which take Gaussian random seeds so that (i) the output is far in Wasserstein distance from the target distribution, but (ii) no polynomially large Lipschitz discriminator ReLU network can detect this. This implies that even achieving a population minimax optimal solution to the Wasserstein GAN objective is likely insufficient for distribution learning in the usual statistical sense. Our techniques reveal a deep connection between GANs and PRGs, which we believe will lead to further insights into the computational landscape of GANs.
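
For context, the population Wasserstein GAN objective referred to in the abstract is the minimax problem

\[
\min_{G} \; \max_{D \,:\, \|D\|_{\mathrm{Lip}} \le 1} \; \mathbb{E}_{x \sim p_{\mathrm{target}}}[D(x)] \;-\; \mathbb{E}_{z \sim \mathcal{N}(0, I)}[D(G(z))],
\]

where the paper restricts the discriminators D to polynomially large Lipschitz ReLU networks. The result says a generator G can be (near-)optimal against every such discriminator while its output distribution stays far from the target in Wasserstein distance.

As a purely illustrative sketch of the kind of generator the abstract describes (not the paper's PRG-based construction; sizes and weights below are random placeholders), a constant-depth ReLU network mapping a Gaussian random seed to a sample looks like:

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
k, h, d = 16, 64, 32  # seed dim, hidden width, output dim (illustrative sizes)

# Depth-2 ReLU generator G: R^k -> R^d with placeholder random weights.
W1, b1 = rng.normal(size=(h, k)), np.zeros(h)
W2, b2 = rng.normal(size=(d, h)), np.zeros(d)

def generator(z):
    return W2 @ relu(W1 @ z + b1) + b2

z = rng.normal(size=k)  # Gaussian random seed z ~ N(0, I_k)
x = generator(z)        # one sample from the generator's output distribution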
