Paper Title

Adversarial Bayesian Simulation

Paper Authors

Wang, Yuexi, Ročková, Veronika

Paper Abstract

In the absence of explicit or tractable likelihoods, Bayesians often resort to approximate Bayesian computation (ABC) for inference. Our work bridges ABC with deep neural implicit samplers based on generative adversarial networks (GANs) and adversarial variational Bayes. Both ABC and GANs compare aspects of observed and fake data to simulate from posteriors and likelihoods, respectively. We develop a Bayesian GAN (B-GAN) sampler that directly targets the posterior by solving an adversarial optimization problem. B-GAN is driven by a deterministic mapping learned on the ABC reference by conditional GANs. Once the mapping has been trained, iid posterior samples are obtained by filtering noise at a negligible additional cost. We propose two post-processing local refinements using (1) data-driven proposals with importance reweighting, and (2) variational Bayes. We support our findings with frequentist-Bayesian results, showing that the typical total variation distance between the true and approximate posteriors converges to zero for certain neural network generators and discriminators. Our findings on simulated data show highly competitive performance relative to some of the most recent likelihood-free posterior simulators.
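
The abstract describes a concrete recipe: simulate an ABC reference table of (theta, x) pairs from the prior and the simulator, train a conditional GAN whose generator maps noise plus data to parameters, and then obtain approximate posterior draws for the observed data by pushing fresh noise through the trained generator ("filtering noise"). Below is a minimal sketch of that recipe in PyTorch on a toy Gaussian simulator; the network sizes, the simulator, and the training settings are illustrative assumptions, not the authors' implementation, and the post-processing refinements (importance reweighting, variational Bayes) are omitted.

```python
# Hypothetical B-GAN-style sketch: a conditional GAN trained on an ABC
# reference table of (theta, x) pairs; the generator then acts as an
# implicit posterior sampler for the observed data.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy likelihood-free setup (assumption): theta ~ N(0, 1), x | theta ~ N(theta, 0.5^2).
def simulate_reference(n):
    theta = torch.randn(n, 1)            # prior draws
    x = theta + 0.5 * torch.randn(n, 1)  # simulator output (summary statistic)
    return theta, x

theta_ref, x_ref = simulate_reference(20_000)  # ABC reference table

# Conditional generator: (noise z, data x) -> theta.
G = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
# Discriminator: (theta, x) -> logit for "real pair from the reference table".
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2_000):
    idx = torch.randint(0, theta_ref.size(0), (256,))
    theta_b, x_b = theta_ref[idx], x_ref[idx]

    # Discriminator step: real pairs (theta, x) vs. fake pairs (G(z, x), x).
    z = torch.randn(256, 1)
    theta_fake = G(torch.cat([z, x_b], dim=1)).detach()
    d_real = D(torch.cat([theta_b, x_b], dim=1))
    d_fake = D(torch.cat([theta_fake, x_b], dim=1))
    loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: make fake pairs indistinguishable from reference pairs.
    z = torch.randn(256, 1)
    theta_fake = G(torch.cat([z, x_b], dim=1))
    d_fake = D(torch.cat([theta_fake, x_b], dim=1))
    loss_G = bce(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

# "Filtering noise": iid approximate posterior draws for an observed x_obs.
x_obs = torch.tensor([[1.2]])
with torch.no_grad():
    z = torch.randn(5_000, 1)
    posterior_draws = G(torch.cat([z, x_obs.expand(5_000, 1)], dim=1))
print(posterior_draws.mean().item(), posterior_draws.std().item())
```

In this toy conjugate setting the generator's output distribution can be checked against the exact Gaussian posterior, which is one way to sanity-test such a sampler before moving to a genuinely intractable simulator.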
