Paper Title

Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization

Paper Authors

Hung Tran-The, Sunil Gupta, Santu Rana, Svetha Venkatesh

Paper Abstract

The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty due to its simplicity and efficiency. Despite its popularity, the theoretical aspects of this algorithm have not been properly analyzed. In particular, whether the EI strategy with a standard incumbent converges in the noisy setting is still an open question for the Gaussian process bandit optimization problem. We aim to answer this question by proposing a variant of EI with a standard incumbent defined via the GP predictive mean. We prove that our algorithm converges and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$, where $\gamma_T$ is the maximum information gain between $T$ observations and the Gaussian process model. Based on this variant of EI, we further propose an algorithm called Improved GP-EI that converges faster than previous counterparts. In particular, our proposed variants of EI do not require knowledge of the RKHS norm or of the sub-Gaussianity parameter of the noise, as required in previous works. Empirical validation in our paper demonstrates the effectiveness of our algorithms compared to several baselines.
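For context, the cumulative regret in this setting is $R_T = \sum_{t=1}^{T} \big(f(x^*) - f(x_t)\big)$, where $x^*$ is a maximizer of the unknown objective $f$ and $x_t$ is the point queried at round $t$. The snippet below is a minimal sketch of the closed-form EI acquisition with a mean-based incumbent, in the spirit of the variant the abstract describes; the function and variable names are illustrative, and taking the incumbent as the maximum posterior mean over previously queried points is an assumption about the "standard incumbent defined via the GP predictive mean," not the paper's exact specification.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """Closed-form EI for maximization: E[max(f(x) - incumbent, 0)]
    under a Gaussian posterior f(x) ~ N(mu, sigma^2)."""
    sigma = np.maximum(sigma, 1e-12)        # guard against zero predictive std
    z = (mu - incumbent) / sigma
    return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)

# Hypothetical GP posterior values for illustration only.
mu_cand = np.array([0.2, 0.5, 0.9])         # posterior means at candidate points
sd_cand = np.array([0.3, 0.1, 0.4])         # posterior stds at candidate points
mu_queried = np.array([0.1, 0.6, 0.4])      # posterior means at past queries

# Incumbent defined via the GP predictive mean (assumed form), rather than
# the best noisy observation max(y).
incumbent = mu_queried.max()

ei = expected_improvement(mu_cand, sd_cand, incumbent)
x_next = int(np.argmax(ei))                 # query the candidate with largest EI
```

Using the posterior-mean maximum rather than the best noisy observation $\max_t y_t$ avoids an incumbent inflated by observation noise, which is precisely the difficulty the noisy setting raises for classical EI.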
