Paper Title

First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

Paper Authors

Zhichao Jia, Benjamin Grimmer

Paper Abstract

Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show that for any Lipschitz, weakly convex objective and constraints, a simple first-order method finds a feasible, $ε$-stationary point at a convergence rate of $O(ε^{-4})$ without relying on compactness or Constraint Qualification (CQ). When CQ holds, this convergence is measured by approximately satisfying the Karush-Kuhn-Tucker conditions. When CQ fails, we guarantee attainment of the weaker Fritz-John conditions. As an illustrative example, our method stably converges on piecewise quadratic SCAD regularized problems despite frequent violations of constraint qualification. The considered algorithm is similar to those of "Quadratically regularized subgradient methods for weakly convex optimization with weakly convex constraints" by Ma et al. and "Stochastic first-order methods for convex and nonconvex functional constrained optimization" by Boob et al. (whose guarantees further assume compactness and CQ), iteratively taking inexact proximal steps, each computed via an inner loop applying a switching subgradient method to a strongly convex constrained subproblem. Our non-Lipschitz analysis of the switching subgradient method appears to be new and may be of independent interest.
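
Illustrative Code Sketch

As a rough illustration of the double-loop scheme the abstract describes, the following Python sketch pairs an outer inexact proximal point loop with a switching subgradient inner loop on each strongly convex, quadratically regularized subproblem. The function names (f_sub, g, g_sub), the step-size rule, and the iterate averaging are illustrative assumptions for a generic weakly convex problem min f(x) s.t. g(x) <= 0; they are not the paper's exact algorithm or parameter choices.

```python
import numpy as np

def switching_subgradient(f_sub, g, g_sub, center, rho, n_steps, tol):
    """Inner loop (sketch): switching subgradient method on the strongly convex
    proximal subproblem  min_x f(x) + rho * ||x - center||^2  s.t. g(x) <= 0.
    Steps on the regularized objective while the constraint is nearly satisfied,
    and on the constraint otherwise."""
    y = center.copy()
    avg, weight = np.zeros_like(y), 0.0
    mu = 2.0 * rho                       # strong convexity modulus of the regularizer
    for t in range(1, n_steps + 1):
        step = 2.0 / (mu * (t + 1))      # classic O(1/t) step size under strong convexity
        if g(y) <= tol:                  # near-feasible: step on the regularized objective
            d = f_sub(y) + 2.0 * rho * (y - center)
        else:                            # infeasible: step to reduce the constraint value
            d = g_sub(y)
        y = y - step * d
        if g(y) <= tol:                  # weighted average over near-feasible iterates
            avg, weight = avg + t * y, weight + t
    return avg / weight if weight > 0 else y

def inexact_proximal_point(f_sub, g, g_sub, x0, rho, outer_iters, inner_iters, tol):
    """Outer loop (sketch): re-center the quadratic regularization at the current
    point and solve each subproblem inexactly with the inner loop."""
    x = x0.copy()
    for _ in range(outer_iters):
        x = switching_subgradient(f_sub, g, g_sub, center=x, rho=rho,
                                  n_steps=inner_iters, tol=tol)
    return x
```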
