Title
A flexible empirical Bayes approach to multiple linear regression and connections with penalized regression
Authors
Abstract
We introduce a new empirical Bayes approach for large-scale multiple linear regression. Our approach combines two key ideas: (i) the use of flexible "adaptive shrinkage" priors, which approximate the nonparametric family of scale mixtures of normal distributions by a finite mixture of normal distributions; and (ii) the use of variational approximations to efficiently estimate prior hyperparameters and compute approximate posteriors. Combining these two ideas results in fast and flexible methods, with computational speed comparable to fast penalized regression methods such as the Lasso, and with competitive prediction accuracy across a wide range of scenarios. Further, we provide new results that establish conceptual connections between our empirical Bayes methods and penalized methods. Specifically, we show that the posterior mean from our method solves a penalized regression problem, with the form of the penalty function being learned from the data by directly solving an optimization problem (rather than being tuned by cross-validation). Our methods are implemented in an R package, mr.ash.alpha, available from https://github.com/stephenslab/mr.ash.alpha.
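To give a concrete feel for idea (i), the following is a minimal illustrative sketch (not the paper's implementation) of the shrinkage induced by a finite mixture-of-normals prior. It assumes a single observed effect estimate `bhat` with known noise variance `s2`, and fixed mixture weights `pi` over a grid of prior variances `sigma2`; the actual method instead learns the weights from all regression coefficients jointly via variational empirical Bayes.

```python
import numpy as np

def mixture_posterior_mean(bhat, s2, pi, sigma2):
    """Posterior mean of b given bhat ~ N(b, s2), under a prior on b that is
    a finite mixture of zero-mean normals: sum_k pi[k] * N(0, sigma2[k]).
    (Illustrative sketch; variable names are this example's, not the package's.)"""
    # Marginal likelihood of bhat under component k is N(0, sigma2[k] + s2).
    var = sigma2 + s2
    loglik = -0.5 * (np.log(2 * np.pi * var) + bhat**2 / var)
    # Posterior responsibility of each mixture component (stabilized softmax).
    w = pi * np.exp(loglik - loglik.max())
    w /= w.sum()
    # Each component shrinks bhat by the factor sigma2[k] / (sigma2[k] + s2);
    # the overall posterior mean is the responsibility-weighted average.
    return float(np.sum(w * (sigma2 / var) * bhat))
```

Including a point mass at zero (a component with `sigma2[k] = 0`) lets the same family mimic sparse priors, which is one reason this mixture family can adapt between Lasso-like and ridge-like shrinkage.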