Paper Title
PAVI: Plate-Amortized Variational Inference
Paper Authors
Paper Abstract
Given some observed data and a probabilistic generative model, Bayesian inference aims at obtaining the distribution of a model's latent parameters that could have yielded the data. This task is challenging for large population studies, where thousands of measurements are performed over a cohort of hundreds of subjects, resulting in a massive latent parameter space. This large cardinality renders off-the-shelf Variational Inference (VI) computationally impractical. In this work, we design structured VI families that can efficiently tackle large population studies. To this end, our main idea is to share the parameterization and learning across the different i.i.d. variables in a generative model, symbolized by the model's plates. We name this concept plate amortization and illustrate the powerful synergies it entails, resulting in expressive, parsimoniously parameterized, large-scale hierarchical variational distributions that are orders of magnitude faster to train. We illustrate the practical utility of PAVI through a challenging neuroimaging example featuring a million latent parameters, demonstrating a significant step towards scalable and expressive Variational Inference.
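To make the idea of plate amortization concrete, the following NumPy sketch contrasts a per-subject variational parameterization with a shared-encoder one on a toy hierarchical Gaussian model. This is a hypothetical illustration of the general concept, not the paper's actual architecture: the model, the encoder shape, and all variable names are assumptions made here for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hierarchical model (illustrative, not the paper's model):
#   theta_i ~ N(mu, 1) for each subject i, x_ij ~ N(theta_i, 1).
n_subjects, n_obs = 1000, 50
mu_true = 2.0
theta_true = mu_true + rng.standard_normal(n_subjects)
x = theta_true[:, None] + rng.standard_normal((n_subjects, n_obs))

# Non-amortized mean-field VI: one (loc, log_scale) pair per subject,
# so the parameter count grows linearly with the cohort size.
n_params_full = 2 * n_subjects

# Plate-amortized sketch: a single encoder, shared across all subjects
# in the plate, maps each subject's summary statistics to its own
# variational (loc, log_scale). The parameter count is that of the
# encoder alone, independent of the number of subjects.
d_feat, d_hidden = 2, 16
W1 = rng.standard_normal((d_feat, d_hidden)) * 0.1
W2 = rng.standard_normal((d_hidden, 2)) * 0.1
n_params_amortized = W1.size + W2.size

feats = np.stack([x.mean(axis=1), x.std(axis=1)], axis=1)  # (n_subjects, 2)
h = np.tanh(feats @ W1)                 # shared hidden layer
loc, log_scale = (h @ W2).T             # per-subject variational parameters

print(n_params_full, n_params_amortized)  # 2000 vs 64
```

The sharing is what makes the approach scale: adding subjects adds data points for the shared encoder to learn from, rather than fresh parameters to optimize, which is the synergy the abstract alludes to.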