Paper Title


Contrastive Losses and Solution Caching for Predict-and-Optimize

Authors

Maxime Mulamba, Jayanta Mandi, Michelangelo Diligenti, Michele Lombardi, Victor Bucarey, Tias Guns

Abstract


Many decision-making processes involve solving a combinatorial optimization problem with uncertain input that can be estimated from historic data. Recently, problems in this class have been successfully addressed via end-to-end learning approaches, which rely on solving one optimization problem for each training instance at every epoch. In this context, we provide two distinct contributions. First, we use a Noise Contrastive approach to motivate a family of surrogate loss functions, based on viewing non-optimal solutions as negative examples. Second, we address a major bottleneck of all predict-and-optimize approaches, i.e. the need to frequently recompute optimal solutions at training time. This is done via a solver-agnostic solution caching scheme, and by replacing optimization calls with a lookup in the solution cache. The method is formally based on an inner approximation of the feasible space and, combined with a cache lookup strategy, provides a controllable trade-off between training time and accuracy of the loss approximation. We empirically show that even a very slow growth rate is enough to match the quality of state-of-the-art methods, at a fraction of the computational cost.
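The two contributions in the abstract can be illustrated with a minimal sketch. Assumptions not in the source: a linear objective `c·x` over a small finite feasible set, an exact `solve` oracle, a simple margin-based variant standing in for the paper's family of contrastive surrogate losses, and the names `cached_solve`, `nce_loss`, and `p_solve` (the cache growth rate), all hypothetical:

```python
import random

def solve(c, feasible):
    """Exact oracle: return the feasible solution minimizing c . x."""
    return min(feasible, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))

def cached_solve(c, cache, feasible, p_solve=0.05):
    """Solution caching: with small probability p_solve call the real
    solver and grow the cache (an inner approximation of the feasible
    space); otherwise replace the optimization call with a lookup over
    the solutions already cached."""
    if random.random() < p_solve:
        x = solve(c, feasible)
        cache.add(x)  # the cache only ever contains feasible solutions
        return x
    return min(cache, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))

def nce_loss(c_hat, x_star, negatives):
    """Contrastive surrogate (simplified margin form): the predicted
    cost vector c_hat should make the true optimum x_star cheaper than
    every non-optimal solution used as a negative example."""
    obj = lambda x: sum(ci * xi for ci, xi in zip(c_hat, x))
    return sum(max(0.0, obj(x_star) - obj(x_neg)) for x_neg in negatives)
```

In this reading, the negatives fed to the loss are drawn from the same cache, so training touches the solver only on the rare `p_solve` branch; `p_solve` is the knob trading training time against the accuracy of the loss approximation.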
