Title

Monotonicity Regularization: Improved Penalties and Novel Applications to Disentangled Representation Learning and Robust Classification

Authors

Joao Monteiro, Mohamed Osama Ahmed, Hossein Hajimirsadeghi, Greg Mori

Abstract

We study settings where gradient penalties are used alongside risk minimization with the goal of obtaining predictors satisfying different notions of monotonicity. Specifically, we present two sets of contributions. In the first part of the paper, we show that different choices of penalties define the regions of the input space where the property is observed. As such, previous methods result in models that are monotonic only in a small volume of the input space. We thus propose an approach that uses mixtures of training instances and random points to populate the space and enforce the penalty in a much larger region. As a second set of contributions, we introduce regularization strategies that enforce other notions of monotonicity in different settings. In this case, we consider applications, such as image classification and generative modeling, where monotonicity is not a hard constraint but can help improve some aspects of the model. Namely, we show that inducing monotonicity can be beneficial in applications such as: (1) allowing for controllable data generation, (2) defining strategies to detect anomalous data, and (3) generating explanations for predictions. Our proposed approaches do not introduce relevant computational overhead while leading to efficient procedures that provide extra benefits over baseline models.
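The abstract's first contribution can be illustrated concretely: a gradient penalty that hinges on negative partial derivatives, evaluated not only at training instances but at convex mixtures of training instances and random points drawn from the input domain, so the monotonicity property is encouraged over a larger region. The sketch below is an assumption-laden illustration, not the paper's exact loss; the function name `monotonicity_penalty`, the finite-difference gradient estimate, and the uniform sampling over the training bounding box are all illustrative choices.

```python
import numpy as np

def monotonicity_penalty(f, x_train, n_random=64, eps=1e-3, rng=None):
    """Illustrative sketch of a monotonicity gradient penalty.

    Penalizes negative partial derivatives of f at points sampled as
    convex mixtures of training instances and random points, so the
    penalty covers a larger region of input space than training
    points alone. (The paper's actual loss may differ.)
    """
    rng = np.random.default_rng(rng)
    d = x_train.shape[1]
    # Random points covering the input domain (here: the training bounding box).
    lo, hi = x_train.min(axis=0), x_train.max(axis=0)
    x_rand = rng.uniform(lo, hi, size=(n_random, d))
    # Mix each random point with a random training instance (mixup-style).
    lam = rng.uniform(0.0, 1.0, size=(n_random, 1))
    idx = rng.integers(0, len(x_train), size=n_random)
    x_mix = lam * x_train[idx] + (1.0 - lam) * x_rand
    # Finite-difference estimate of each partial derivative at the mixed points.
    penalty = 0.0
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        grad_i = (f(x_mix + e) - f(x_mix - e)) / (2 * eps)
        # Hinge on the negative part: zero wherever f is non-decreasing in x_i.
        penalty += np.mean(np.maximum(0.0, -grad_i) ** 2)
    return penalty

# A non-decreasing function incurs (near-)zero penalty; a decreasing one does not.
f_up = lambda x: x.sum(axis=1)
f_down = lambda x: -x.sum(axis=1)
x = np.random.default_rng(0).normal(size=(32, 2))
print(monotonicity_penalty(f_up, x, rng=0))    # ~0.0
print(monotonicity_penalty(f_down, x, rng=0))  # clearly positive
```

In practice such a term would be added, with a weight, to the risk-minimization objective; the point of mixing training instances with random points is exactly the abstract's observation that enforcing the penalty only at training data leaves most of the input space unconstrained.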
