Paper Title

Differentiable Expected Hypervolume Improvement for Parallel Multi-Objective Bayesian Optimization

Paper Authors

Samuel Daulton, Maximilian Balandat, Eytan Bakshy

Paper Abstract

In many real-world scenarios, decision makers seek to efficiently optimize multiple competing objectives in a sample-efficient fashion. Multi-objective Bayesian optimization (BO) is a common approach, but many of the best-performing acquisition functions do not have known analytic gradients and suffer from high computational overhead. We leverage recent advances in programming models and hardware acceleration for multi-objective BO using Expected Hypervolume Improvement (EHVI)---an algorithm notorious for its high computational complexity. We derive a novel formulation of q-Expected Hypervolume Improvement (qEHVI), an acquisition function that extends EHVI to the parallel, constrained evaluation setting. qEHVI is an exact computation of the joint EHVI of q new candidate points (up to Monte-Carlo (MC) integration error). Whereas previous EHVI formulations rely on gradient-free acquisition optimization or approximated gradients, we compute exact gradients of the MC estimator via auto-differentiation, thereby enabling efficient and effective optimization using first-order and quasi-second-order methods. Our empirical evaluation demonstrates that qEHVI is computationally tractable in many practical scenarios and outperforms state-of-the-art multi-objective BO algorithms at a fraction of their wall time.
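Below is a minimal, illustrative sketch of a single qEHVI iteration, assuming the open-source BoTorch library, which provides a qExpectedHypervolumeImprovement acquisition function. The toy objectives, reference point, sample counts, and batch size q=2 are placeholders, and exact import paths and constructor arguments may differ across BoTorch versions; this is a sketch under those assumptions, not a definitive reproduction of the paper's experiments.

```python
# Sketch: one round of parallel multi-objective BO with qEHVI via BoTorch.
# Toy problem, reference point, and hyperparameters are illustrative only.
import torch
from botorch.models import SingleTaskGP, ModelListGP
from botorch.fit import fit_gpytorch_mll  # older versions: fit_gpytorch_model
from botorch.acquisition.multi_objective import qExpectedHypervolumeImprovement
from botorch.utils.multi_objective.box_decompositions.non_dominated import (
    FastNondominatedPartitioning,
)
from botorch.sampling import SobolQMCNormalSampler  # older versions take num_samples=...
from botorch.optim import optimize_acqf
from gpytorch.mlls import SumMarginalLogLikelihood


def objectives(x):
    # Toy 2-objective maximization problem on [0, 1]^2 (placeholder).
    f1 = -(x ** 2).sum(dim=-1, keepdim=True)
    f2 = -((x - 0.5) ** 2).sum(dim=-1, keepdim=True)
    return torch.cat([f1, f2], dim=-1)


train_x = torch.rand(10, 2, dtype=torch.double)
train_y = objectives(train_x)

# Independent GP surrogate per objective, wrapped in a model list.
models = [SingleTaskGP(train_x, train_y[:, i : i + 1]) for i in range(train_y.shape[-1])]
model = ModelListGP(*models)
mll = SumMarginalLogLikelihood(model.likelihood, model)
fit_gpytorch_mll(mll)

# Box decomposition of the region dominated by current observations,
# relative to a user-chosen reference point (must be dominated by the data).
ref_point = torch.tensor([-2.5, -2.5], dtype=torch.double)
partitioning = FastNondominatedPartitioning(ref_point=ref_point, Y=train_y)

# qEHVI: MC estimate of the joint hypervolume improvement of q candidates.
# Gradients of the MC estimator flow through the sampler via autograd.
acqf = qExpectedHypervolumeImprovement(
    model=model,
    ref_point=ref_point.tolist(),
    partitioning=partitioning,
    sampler=SobolQMCNormalSampler(sample_shape=torch.Size([128])),
)

# Optimize the acquisition with multi-start quasi-second-order optimization
# to select q=2 points for parallel evaluation.
candidates, _ = optimize_acqf(
    acq_function=acqf,
    bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double),
    q=2,
    num_restarts=10,
    raw_samples=128,
)
print(candidates)
```

Because the MC estimator is built from differentiable tensor operations, its exact gradients are available through automatic differentiation, which is what allows the acquisition optimization step above to use gradient-based (first-order or quasi-second-order) methods rather than gradient-free search.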
