Title
Penalized empirical likelihood estimation and EM algorithms for closed-population capture-recapture models
Authors
Abstract
Capture-recapture experiments are widely used to estimate the abundance of a finite population. Based on capture-recapture data, the empirical likelihood (EL) method has been shown to outperform the conventional conditional likelihood (CL) method. However, the current literature on EL abundance estimation ignores behavioral effects, and the EL estimates may be unstable, especially when the capture probability is low. We make three contributions in this paper. First, we extend the EL method to capture-recapture models that account for behavioral effects. Second, to overcome the instability of the EL method, we propose a penalized EL (PEL) estimation method that penalizes large abundance values. We then investigate the asymptotics of the maximum PEL estimator and the PEL ratio statistic. Third, we develop standard expectation-maximization (EM) algorithms for PEL to improve its practical performance. These EM algorithms are also applicable to EL and CL with slight modifications. Our simulations and a real-world data analysis demonstrate that the PEL method successfully overcomes the instability of the EL method and that the proposed EM algorithms produce more reliable results than existing optimization algorithms.
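The abstract does not give the penalty's functional form; purely as an illustrative sketch (not the authors' formulation), a penalized empirical log-likelihood for the abundance N might be written as below, where \ell_{\mathrm{EL}}(N,\theta) denotes the empirical log-likelihood of the capture-recapture model, \theta collects the remaining parameters, and f_{\lambda}(N) is an assumed penalty that grows with N, discouraging implausibly large abundance values.

% Illustrative sketch only: a generic penalized empirical log-likelihood;
% the paper's actual penalty and parameterization are not specified in the abstract.
\ell_{\mathrm{PEL}}(N,\theta) = \ell_{\mathrm{EL}}(N,\theta) - f_{\lambda}(N),
\qquad
\widehat{N}_{\mathrm{PEL}} = \arg\max_{N,\theta} \, \ell_{\mathrm{PEL}}(N,\theta).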