Title
Convergence rates of the stochastic alternating algorithm for bi-objective optimization
Authors
Abstract
Stochastic alternating algorithms for bi-objective optimization are considered when optimizing two conflicting functions for which optimization steps have to be applied separately to each function. Such algorithms consist of applying a certain number of gradient or subgradient descent steps on each individual objective at every iteration. In this paper, we show that stochastic alternating algorithms achieve a sublinear convergence rate of $\mathcal{O}(1/T)$, under strong convexity, for the determination of a minimizer of a weighted sum of the two functions, where the weighting is parameterized by the number of steps applied to each of them. An extension to the convex case is presented, for which the rate weakens to $\mathcal{O}(1/\sqrt{T})$. These rates also hold in the non-smooth case. Importantly, by varying the proportion of steps applied to each function, one can determine an approximation to the Pareto front.
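The alternating scheme described in the abstract can be illustrated with a minimal sketch (not the paper's exact algorithm or step-size schedule, which are assumptions here): per outer iteration, apply `n1` stochastic gradient steps on $f_1$ and then `n2` steps on $f_2$, with a diminishing step size. On two simple quadratics, the iterates approach the minimizer of the weighted sum with weights proportional to the step counts.

```python
import random

def alternating_sgd(a, b, n1, n2, T, noise=0.01, seed=0):
    """Stochastic alternating gradient descent on the conflicting
    objectives f1(x) = (x - a)^2 and f2(x) = (x - b)^2.

    Per outer iteration: n1 noisy gradient steps on f1, then n2 on f2,
    with diminishing step size 1/(2k) (k = global step counter), a
    typical choice in the strongly convex setting.
    """
    rng = random.Random(seed)
    x = 0.0
    k = 1  # global step counter driving the diminishing step size
    for _ in range(T):
        for _ in range(n1):  # steps on f1
            g = 2.0 * (x - a) + noise * rng.gauss(0.0, 1.0)
            x -= g / (2.0 * k)
            k += 1
        for _ in range(n2):  # steps on f2
            g = 2.0 * (x - b) + noise * rng.gauss(0.0, 1.0)
            x -= g / (2.0 * k)
            k += 1
    return x

# The iterates approach the minimizer of (n1*f1 + n2*f2)/(n1 + n2),
# i.e. x* = (n1*a + n2*b)/(n1 + n2); here (3*0 + 1*1)/4 = 0.25.
x_hat = alternating_sgd(a=0.0, b=1.0, n1=3, n2=1, T=2000)
print(round(x_hat, 2))  # ≈ 0.25
```

Sweeping the ratio `n1:n2` moves the limit point along the segment between the two individual minimizers, which is how varying the proportion of steps traces an approximation to the Pareto front.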