Paper Title
Manifold Learning for Accelerating Coarse-Grained Optimization
Paper Authors
Paper Abstract
Algorithms proposed for solving high-dimensional optimization problems with no derivative information frequently encounter the "curse of dimensionality," becoming ineffective as the dimension of the parameter space grows. One feature of a subclass of such problems that are effectively low-dimensional is that only a few parameters (or combinations thereof) are important for the optimization and must be explored in detail. Knowing these parameters/combinations in advance would greatly simplify the problem and its solution. We propose the data-driven construction of an effective (coarse-grained, "trend") optimizer, based on data obtained from ensembles of brief simulation bursts with an "inner" optimization algorithm, that has the potential to accelerate the exploration of the parameter space. The trajectories of this "effective optimizer" quickly become attracted onto a slow manifold parameterized by the few relevant parameter combinations. We obtain the parameterization of this low-dimensional, effective optimization manifold on the fly using data mining/manifold learning techniques on the results of simulation (inner optimizer iteration) burst ensembles and exploit it locally to "jump" forward along this manifold. As a result, we can bias the exploration of the parameter space towards the few, important directions and, through this "wrapper algorithm," speed up the convergence of traditional optimization algorithms.
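As a rough illustration of the wrapper idea in the abstract, the sketch below runs an ensemble of brief bursts of a simple derivative-free inner optimizer from perturbed copies of the current point, extracts the dominant slow direction from the pooled iterates, and jumps forward along it. It is a minimal sketch, not the paper's method: PCA (via SVD) stands in for the manifold-learning step (the paper uses data-mining/manifold-learning techniques such as diffusion maps), and all names and parameters here (`inner_burst`, `trend_step`, `jump`, the test objective) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_burst(f, x0, n_steps=20, lr=0.05, h=1e-6):
    """Brief burst of a simple derivative-free inner optimizer
    (central finite-difference gradient descent); returns the
    trajectory of iterates. Stand-in for the paper's 'inner' algorithm."""
    x = x0.copy()
    traj = [x.copy()]
    for _ in range(n_steps):
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(len(x))])
        x = x - lr * g
        traj.append(x.copy())
    return np.asarray(traj)

def trend_step(f, x, n_replicas=8, noise=0.05, jump=3.0):
    """One coarse 'trend optimizer' step (hypothetical wrapper):
    pool an ensemble of perturbed bursts, take the leading PCA
    direction of the collapsed cloud as a local tangent to the slow
    manifold, and attempt an extrapolated jump along it."""
    pooled = np.vstack([
        inner_burst(f, x + noise * rng.standard_normal(x.size))
        for _ in range(n_replicas)
    ])
    center = pooled.mean(axis=0)
    # Fast directions have collapsed during the bursts, so the leading
    # principal direction of the pooled iterates tracks the slow manifold.
    _, _, vt = np.linalg.svd(pooled - center, full_matrices=False)
    v = vt[0]
    drift = pooled[-1] - pooled[0]   # rough direction of optimizer motion
    if drift @ v < 0:
        v = -v                       # orient the jump "downhill"
    candidate = center + jump * np.linalg.norm(drift) * v
    # Accept the jump only if it actually improves the objective.
    return candidate if f(candidate) < f(pooled[-1]) else pooled[-1]

# Usage: a 50-dimensional quadratic that is effectively 2-dimensional,
# so the optimization trajectory quickly settles onto a slow manifold.
Q = np.diag([10.0, 5.0] + [0.01] * 48)
f = lambda x: x @ Q @ x
x = rng.standard_normal(50)
for _ in range(15):
    x = trend_step(f, x)
print("final objective:", f(x))
```

The accept/reject guard at the end of `trend_step` is one simple safeguard for an overly aggressive `jump` factor; the linear PCA step would need to be replaced by a nonlinear manifold-learning parameterization when the slow manifold is curved.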