Paper Title

Iterative Hard Thresholding with Adaptive Regularization: Sparser Solutions Without Sacrificing Runtime

Paper Authors

Kyriakos Axiotis, Maxim Sviridenko

Paper Abstract

We propose a simple modification to the iterative hard thresholding (IHT) algorithm, which recovers asymptotically sparser solutions as a function of the condition number. When aiming to minimize a convex function $f(x)$ with condition number $κ$ subject to $x$ being an $s$-sparse vector, the standard IHT guarantee is a solution with relaxed sparsity $O(sκ^2)$, while our proposed algorithm, regularized IHT, returns a solution with sparsity $O(sκ)$. Our algorithm significantly improves over ARHT which also finds a solution of sparsity $O(sκ)$, as it does not require re-optimization in each iteration (and so is much faster), is deterministic, and does not require knowledge of the optimal solution value $f(x^*)$ or the optimal sparsity level $s$. Our main technical tool is an adaptive regularization framework, in which the algorithm progressively learns the weights of an $\ell_2$ regularization term that will allow convergence to sparser solutions. We also apply this framework to low rank optimization, where we achieve a similar improvement of the best known condition number dependence from $κ^2$ to $κ$.
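
To make the setting concrete, here is a minimal Python sketch of standard IHT for sparse least squares, together with a schematic variant that maintains per-coordinate $\ell_2$ regularization weights. The `regularized_iht_sketch` function, its `reg_rate` parameter, and the weight-update rule are illustrative placeholders only; they convey the adaptive-regularization idea described in the abstract, not the algorithm or guarantees from the paper itself.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    if s > 0:
        idx = np.argpartition(np.abs(x), -s)[-s:]
        out[idx] = x[idx]
    return out

def iht(A, b, s, step, iters=500):
    """Standard IHT for min 0.5*||Ax - b||^2 subject to ||x||_0 <= s."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = hard_threshold(x - step * grad, s)
    return x

def regularized_iht_sketch(A, b, s, step, iters=500, reg_rate=0.1):
    """Illustrative sketch only: IHT with a per-coordinate l2 penalty whose
    weights are adapted across iterations. The weight update below is a
    placeholder heuristic, not the update rule from the paper."""
    n = A.shape[1]
    x = np.zeros(n)
    w = np.zeros(n)  # adaptive per-coordinate regularization weights
    for _ in range(iters):
        # gradient of f(x) + 0.5 * sum_i w_i * x_i^2 for f(x) = 0.5*||Ax - b||^2
        grad = A.T @ (A @ x - b) + w * x
        x = hard_threshold(x - step * grad, s)
        # placeholder: increase the penalty on coordinates currently off the support
        w = (1 - reg_rate) * w + reg_rate * (x == 0).astype(float)
    return x
```

The step size `step`, the iteration count, and the `reg_rate` schedule above are assumptions for the sketch; the paper specifies the actual adaptive weighting scheme and the conditions under which the $O(s\kappa)$ sparsity bound holds.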
