Paper Title
Chordal Sparsity for Lipschitz Constant Estimation of Deep Neural Networks
Paper Authors
Paper Abstract
Lipschitz constants of neural networks allow for guarantees of robustness in image classification, safety in controller design, and generalizability beyond the training data. As calculating Lipschitz constants is NP-hard, techniques for estimating Lipschitz constants must navigate the trade-off between scalability and accuracy. In this work, we significantly push the scalability frontier of a semidefinite programming technique known as LipSDP while achieving zero accuracy loss. We first show that LipSDP has chordal sparsity, which allows us to derive a chordally sparse formulation that we call Chordal-LipSDP. The key benefit is that the main computational bottleneck of LipSDP, a large semidefinite constraint, is now decomposed into an equivalent collection of smaller ones: allowing Chordal-LipSDP to outperform LipSDP particularly as the network depth grows. Moreover, our formulation uses a tunable sparsity parameter that enables one to gain tighter estimates without incurring a significant computational cost. We illustrate the scalability of our approach through extensive numerical experiments.
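To make the object being decomposed concrete, the following is a minimal sketch of the standard LipSDP semidefinite program (in the style of Fazlyab et al.) for a one-hidden-layer ReLU network, written with CVXPY. The network dimensions, random weights, and solver choice are illustrative assumptions, and this is not the Chordal-LipSDP decomposition proposed in the paper; it only exhibits the single large semidefinite constraint that, per the abstract, Chordal-LipSDP splits into an equivalent collection of smaller ones.

```python
import numpy as np
import cvxpy as cp

# Illustrative one-hidden-layer ReLU network f(x) = W1 @ relu(W0 @ x).
np.random.seed(0)
n0, n1, n2 = 4, 8, 3                      # input, hidden, output dims (assumed for the example)
W0 = np.random.randn(n1, n0) / np.sqrt(n0)
W1 = np.random.randn(n2, n1) / np.sqrt(n1)

rho = cp.Variable(nonneg=True)            # rho = L^2, the squared Lipschitz bound
lam = cp.Variable(n1, nonneg=True)        # one nonnegative multiplier per hidden neuron
T = cp.diag(lam)

# ReLU is slope-restricted in [0, 1], so the LipSDP matrix inequality reduces to
#   [[ -rho*I,   W0^T T        ],
#    [  T W0,   -2T + W1^T W1 ]]  <=  0   (negative semidefinite).
off = W0.T @ T
M = cp.bmat([
    [-rho * np.eye(n0), off],
    [off.T,             -2 * T + W1.T @ W1],
])

# M is symmetric by construction; symmetrizing explicitly keeps the modeling layer happy.
prob = cp.Problem(cp.Minimize(rho), [(M + M.T) / 2 << 0])
prob.solve(solver=cp.SCS)

print("LipSDP bound :", np.sqrt(rho.value))
print("Naive bound  :", np.linalg.norm(W0, 2) * np.linalg.norm(W1, 2))
```

For deeper networks, the matrix in this single semidefinite constraint grows with the total number of neurons, which is the computational bottleneck the chordal decomposition in the paper targets.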