Paper Title

PointMask: Towards Interpretable and Bias-Resilient Point Cloud Processing

Authors

Saeid Asgari Taghanaki, Kaveh Hassani, Pradeep Kumar Jayaraman, Amir Hosein Khasahmadi, Tonya Custis

Abstract


Deep classifiers tend to associate a few discriminative input variables with their objective function, which, in turn, may hurt their generalization capabilities. To address this, one can design systematic experiments and/or inspect the models via interpretability methods. In this paper, we investigate both of these strategies on deep models operating on point clouds. We propose PointMask, a model-agnostic interpretable information-bottleneck approach for attribution in point cloud models. PointMask encourages exploring the majority of variation factors in the input space while gradually converging to a general solution. More specifically, PointMask introduces a regularization term that minimizes the mutual information between the input and the latent features used to mask out irrelevant variables. We show that coupling a PointMask layer with an arbitrary model can discern the points in the input space which contribute the most to the prediction score, thereby leading to interpretability. Through designed bias experiments, we also show that thanks to its gradual masking feature, our proposed method is effective in handling data bias.
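To make the idea concrete, here is a minimal NumPy sketch of the two ingredients the abstract describes: a soft per-point mask gating a point cloud, and an information-bottleneck-style regularizer that discourages the mask from carrying more information than needed. This is a hypothetical illustration, not the authors' implementation; the Bernoulli-KL penalty is a common stand-in for the mutual-information term, and all function and parameter names (`soft_point_mask`, `bottleneck_penalty`, `prior`) are assumptions.

```python
import numpy as np

def soft_point_mask(points, scores, temperature=1.0):
    """Gate a point cloud with a soft (sigmoid) per-point mask.

    points: (N, 3) array of xyz coordinates.
    scores: (N,) unnormalized mask logits (would be learned in practice).
    Returns the masked points and the mask values in (0, 1).
    """
    mask = 1.0 / (1.0 + np.exp(-scores / temperature))  # sigmoid gate
    return points * mask[:, None], mask

def bottleneck_penalty(mask, prior=0.5, eps=1e-8):
    """Mean Bernoulli KL divergence between the mask and an
    uninformative prior -- a common surrogate for minimizing the
    mutual information between input and masking features
    (hypothetical formulation, not the paper's exact loss)."""
    m = np.clip(mask, eps, 1.0 - eps)
    return float(np.mean(m * np.log(m / prior)
                         + (1.0 - m) * np.log((1.0 - m) / (1.0 - prior))))
```

In training, one would add `bottleneck_penalty` to the task loss so the mask stays close to the prior unless a point genuinely contributes to the prediction; points whose mask values survive this pressure are the ones attributed as important.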
