Paper Title

Distributed Learning of Finite Gaussian Mixtures

Authors

Qiong Zhang, Jiahua Chen

Abstract

Advances in information technology have led to extremely large datasets that are often kept in different storage centers. Existing statistical methods must be adapted to overcome the resulting computational obstacles while retaining statistical validity and efficiency. Split-and-conquer approaches have been applied in many areas, including quantile processes, regression analysis, principal eigenspaces, and exponential families. We study split-and-conquer approaches for the distributed learning of finite Gaussian mixtures. We recommend a reduction strategy and develop an effective MM algorithm. The new estimator is shown to be consistent and retains root-n consistency under some general conditions. Experiments based on simulated and real-world data show that the proposed split-and-conquer approach has comparable statistical performance with the global estimator based on the full dataset, if the latter is feasible. It can even slightly outperform the global estimator if the model assumption does not match the real-world data. It also has better statistical and computational performance than some existing methods.
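To illustrate the general split-and-conquer idea described above, here is a minimal toy sketch in Python: data are split across "machines", a small Gaussian mixture is fitted locally on each split by EM, and the local estimates are combined. The combination step below is a naive average after sorting components by mean; it stands in for, and is not, the paper's reduction strategy or MM algorithm, and all function names are hypothetical.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=200):
    """Fit a k-component 1-D Gaussian mixture by plain EM (toy reference code)."""
    # Deterministic initialization: spread component means over the data quantiles.
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    w = np.full(k, 1.0 / k)
    var = np.full(k, x.var())
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of mixing weights, means, and variances.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Simulate data from a two-component mixture and split it across four "machines".
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 5000), rng.normal(3.0, 1.0, 5000)])
rng.shuffle(x)
splits = np.array_split(x, 4)

# Fit a local mixture on each split, align components by sorting their means,
# then average the aligned local estimates (naive aggregation, for illustration only).
aligned = []
for s in splits:
    w, mu, var = em_gmm_1d(s)
    order = np.argsort(mu)
    aligned.append((w[order], mu[order], var[order]))
w_bar = np.mean([a[0] for a in aligned], axis=0)
mu_bar = np.mean([a[1] for a in aligned], axis=0)
print("aggregated means:", np.round(mu_bar, 2))
```

Naive averaging like this can fail when local fits label components inconsistently or the model is misspecified, which is one motivation for the more careful reduction studied in the paper.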
