Paper Title

Stochastic Learning for Sparse Discrete Markov Random Fields with Controlled Gradient Approximation Error

Authors

Sinong Geng, Zhaobin Kuang, Jie Liu, Stephen Wright, David Page

Abstract

We study the $L_1$-regularized maximum likelihood estimation (MLE) problem for discrete Markov random fields (MRFs), where efficient and scalable learning requires both sparse regularization and approximate inference. To address these challenges, we consider a stochastic learning framework called stochastic proximal gradient (SPG; Honorio 2012a; Atchade et al. 2014; Miasojedow and Rejchel 2016). SPG is an inexact proximal gradient algorithm [Schmidt et al., 2011] whose inexactness stems from the stochastic oracle (Gibbs sampling) used for gradient approximation: exact gradient evaluation is infeasible in general because inference in discrete MRFs is NP-hard [Koller and Friedman, 2009]. Theoretically, we provide novel verifiable bounds to inspect and control the quality of the gradient approximation. Empirically, we propose the tighten asymptotically (TAY) learning strategy, based on the verifiable bounds, to boost the performance of SPG.
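To make the SPG idea concrete, here is a minimal illustrative sketch (not the paper's implementation) for an $L_1$-regularized pairwise Ising MRF: the gradient of the negative log-likelihood is the model expectation of pairwise statistics minus the empirical expectation, the model expectation is approximated by Gibbs sampling (the stochastic oracle), and the $L_1$ proximal step is soft-thresholding. All function names, step sizes, and sample counts below are hypothetical choices for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (exact and cheap).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def gibbs_sweep(x, W, rng):
    # One Gibbs sweep over an Ising model with states in {-1, +1} and
    # P(x) proportional to exp(sum_{i<j} W_ij x_i x_j):
    # P(x_i = 1 | rest) = sigmoid(2 * sum_{j != i} W_ij x_j).
    p = x.shape[0]
    for i in range(p):
        field = W[i] @ x - W[i, i] * x[i]
        prob = 1.0 / (1.0 + np.exp(-2.0 * field))
        x[i] = 1.0 if rng.random() < prob else -1.0
    return x

def spg_ising(data, lam=0.1, step=0.05, iters=100, n_gibbs=20, seed=0):
    # Stochastic proximal gradient for an L1-regularized Ising MRF.
    # The empirical expectation of sufficient statistics is exact;
    # the model expectation is approximated with Gibbs samples, which
    # is the source of the gradient approximation error.
    rng = np.random.default_rng(seed)
    n, p = data.shape
    emp = data.T @ data / n                 # empirical E[x x^T]
    W = np.zeros((p, p))
    x = rng.choice([-1.0, 1.0], size=p)
    for _ in range(iters):
        # Approximate model expectation E_W[x x^T] by averaging samples.
        acc = np.zeros((p, p))
        for _ in range(n_gibbs):
            x = gibbs_sweep(x, W, rng)
            acc += np.outer(x, x)
        model = acc / n_gibbs
        grad = model - emp                  # approximate NLL gradient
        W = soft_threshold(W - step * grad, step * lam)
        np.fill_diagonal(W, 0.0)            # no self-couplings
        W = (W + W.T) / 2.0                 # keep symmetry
    return W
```

The paper's contribution is orthogonal to this basic loop: its verifiable bounds let one monitor how accurate the Gibbs-based gradient estimate is and tighten it (e.g., by increasing the number of samples) as iterations proceed, which is what the TAY strategy exploits.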
