Paper Title


On the best choice of Lasso program given data parameters

Authors

Aaron Berk, Yaniv Plan, Özgür Yilmaz

Abstract


Generalized compressed sensing (GCS) is a paradigm in which a structured high-dimensional signal may be recovered from random, under-determined, and corrupted linear measurements. Generalized Lasso (GL) programs are effective for solving GCS problems due to their proven ability to leverage underlying signal structure. Three popular GL programs are equivalent in a sense and sometimes used interchangeably. Tuned by a governing parameter, each admits an optimal parameter choice. For sparse or low-rank signal structures, this choice yields minimax order-optimal error. While GCS is well-studied, existing theory for GL programs typically concerns this optimally tuned setting. However, the optimal parameter value for a GL program depends on properties of the data, and is typically unknown in practical settings. Performance in empirical problems thus hinges on a program's parameter sensitivity: it is desirable that small variation about the optimal parameter choice begets small variation about the optimal risk. We examine the risk for these three programs and demonstrate that their parameter sensitivity can differ for the same data. We prove that a gauge-constrained GL program admits asymptotic cusp-like behaviour of its risk in the limiting low-noise regime. We prove that a residual-constrained Lasso program has asymptotically suboptimal risk for very sparse vectors. These results contrast with observations about an unconstrained Lasso program, which is relatively less sensitive to its parameter choice. We support the asymptotic theory with numerical simulations, demonstrating that parameter sensitivity of GL programs is readily observed for even modest dimensional parameters. Importantly, these simulations demonstrate regimes in which a GL program exhibits sensitivity to its parameter choice, though the other two do not. We hope this work aids practitioners in selecting a GL program for their problem.
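For readers unfamiliar with the three formulations the abstract refers to, the Lasso programs usually called unconstrained, residual-constrained, and gauge-constrained are, in the sparse (ℓ1) case, standardly written as follows. This is a sketch using conventional notation (measurement matrix A, measurements y, and tuning parameters λ, η, τ); in the generalized setting, the ℓ1 norm is replaced by a gauge suited to the signal structure (e.g. the nuclear norm for low-rank matrices):

```latex
% Unconstrained Lasso, tuned by \lambda > 0:
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda \|x\|_1

% Residual-constrained Lasso (basis pursuit denoising), tuned by \eta \ge 0:
\min_{x \in \mathbb{R}^n} \; \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \eta

% Gauge-constrained Lasso, tuned by \tau \ge 0:
\min_{x \in \mathbb{R}^n} \; \|Ax - y\|_2 \quad \text{subject to} \quad \|x\|_1 \le \tau
```

The equivalence mentioned in the abstract is the standard one: for a given problem instance, suitable choices of λ, η, and τ make the three programs share a solution. The paper's point is that this correspondence does not transfer parameter sensitivity from one program to another.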
