Paper Title

Imputation Maximization Stochastic Approximation with Application to Generalized Linear Mixed Models

Authors

Zexi Song, Zhiqiang Tan

Abstract


Generalized linear mixed models are useful in studying hierarchical data with possibly non-Gaussian responses. However, the intractability of likelihood functions poses challenges for estimation. We develop a new method suitable for this problem, called imputation maximization stochastic approximation (IMSA). For each iteration, IMSA first imputes latent variables/random effects, then maximizes over the complete data likelihood, and finally moves the estimate towards the new maximizer while preserving a proportion of the previous value. The limiting point of IMSA satisfies a self-consistency property and can be less biased in finite samples than the maximum likelihood estimator solved by score-equation based stochastic approximation (ScoreSA). Numerically, IMSA can also be advantageous over ScoreSA in achieving more stable convergence and respecting the parameter ranges under various transformations such as nonnegative variance components. This is corroborated through our simulation studies where IMSA consistently outperforms ScoreSA.
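The three steps per iteration (impute the random effects, maximize the complete-data likelihood, then move the estimate partway toward the new maximizer) can be illustrated on a toy model. The sketch below is a hypothetical, simplified instance, not the paper's implementation: a Gaussian random-intercept model is used instead of a general GLMM, because there both the conditional distribution of the random effects and the complete-data maximizer are available in closed form. All variable names, the step-size schedule, and the model settings are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy data: y_ij = mu + b_i + eps_ij, b_i ~ N(0, tau2), eps_ij ~ N(0, sigma2) ---
# (Hypothetical settings; a Gaussian stand-in for a GLMM so every step is closed form.)
n, m = 200, 5
mu_true, tau2_true, sigma2_true = 1.0, 1.0, 0.5
b_true = rng.normal(0.0, np.sqrt(tau2_true), size=n)
y = mu_true + b_true[:, None] + rng.normal(0.0, np.sqrt(sigma2_true), size=(n, m))
ybar = y.mean(axis=1)

# --- IMSA-style iterations ---
mu, tau2, sigma2 = 0.0, 1.0, 1.0  # starting values
for k in range(1, 501):
    # 1) Imputation: draw each b_i from its conditional given y and current parameters
    post_var = tau2 * sigma2 / (sigma2 + m * tau2)
    post_mean = (m * tau2 / (sigma2 + m * tau2)) * (ybar - mu)
    b = rng.normal(post_mean, np.sqrt(post_var))

    # 2) Maximization of the complete-data likelihood (closed form in this model)
    mu_hat = (y - b[:, None]).mean()
    tau2_hat = np.mean(b**2)
    sigma2_hat = np.mean((y - mu_hat - b[:, None]) ** 2)

    # 3) Stochastic-approximation move: keep a proportion (1 - gamma) of the old value
    gamma = 1.0 / k**0.7  # decreasing steps with sum(gamma) = inf, sum(gamma^2) < inf
    mu = (1 - gamma) * mu + gamma * mu_hat
    tau2 = (1 - gamma) * tau2 + gamma * tau2_hat      # convex combination of
    sigma2 = (1 - gamma) * sigma2 + gamma * sigma2_hat  # nonnegative values

print(mu, tau2, sigma2)
```

Note how the variance-component updates stay nonnegative automatically: each complete-data maximizer is a mean of squares, and the update is a convex combination of nonnegative quantities, illustrating the range-respecting behavior the abstract contrasts with score-equation-based updates.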
