Paper Title

Achieving robustness in classification using optimal transport with hinge regularization

Paper Authors

Serrurier, Mathieu, Mamalet, Franck, González-Sanz, Alberto, Boissin, Thibaut, Loubes, Jean-Michel, del Barrio, Eustasio

Paper Abstract

Adversarial examples have pointed out the vulnerability of Deep Neural Networks to small local noise. It has been shown that constraining their Lipschitz constant should enhance robustness, but makes them harder to learn with classical loss functions. We propose a new framework for binary classification, based on optimal transport, which integrates this Lipschitz constraint as a theoretical requirement. We propose to learn 1-Lipschitz networks using a new loss that is a hinge-regularized version of the Kantorovich-Rubinstein dual formulation for Wasserstein distance estimation. This loss function has a direct interpretation in terms of adversarial robustness, together with a certifiable robustness bound. We also prove that this hinge-regularized version is still the dual formulation of an optimal transport problem, and has a solution. We further establish several geometrical properties of this optimal solution, and extend the approach to multi-class problems. Experiments show that the proposed approach provides the expected guarantees in terms of robustness without any significant accuracy drop. On the proposed models, adversarial examples visibly and meaningfully change the input, providing an explanation for the classification.
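The loss sketched below is a minimal NumPy illustration of the hinge-regularized Kantorovich-Rubinstein (HKR) idea the abstract describes: a KR term that estimates the Wasserstein-1 dual objective, plus a hinge term that enforces a classification margin. The function names, the `alpha`/`margin` defaults, and the balanced-batch estimate `mean(y * f(x))` of the KR term are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def hkr_loss(y_true, y_pred, alpha=10.0, margin=1.0):
    """Sketch of a hinge-regularized KR loss for binary classification.

    y_true: labels in {-1, +1}; y_pred: scores f(x) of a 1-Lipschitz network.
    The KR term (to be maximized) approximates E[f | +] - E[f | -] on a
    balanced batch; the hinge term penalizes margin violations.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    kr = y_true * y_pred                  # signed scores: larger is better
    hinge = np.maximum(0.0, margin - kr)  # margin violations only
    return np.mean(alpha * hinge - kr)    # minimize hinge, maximize KR

def certified_radius(y_pred):
    """For a 1-Lipschitz f, |f(x)| lower-bounds the distance to the
    decision boundary, giving the certifiable robustness radius."""
    return np.abs(np.asarray(y_pred, dtype=float))
```

Well-separated, correctly signed scores incur no hinge penalty and a strongly negative loss, while near-boundary scores are penalized; the certificate follows directly from the Lipschitz constraint, with no extra computation at inference time.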
