Title

A robust estimator of mutual information for deep learning interpretability

Authors

Davide Piras, Hiranya V. Peiris, Andrew Pontzen, Luisa Lucie-Smith, Ningyuan Guo, Brian Nord

Abstract

We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced "Jimmie"), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient, robust to the choice of hyperparameters, and provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground truth MI is known, comparing its performance against established mutual information estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train deep learning models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables, and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available.
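To illustrate the core idea behind a GMM-based MI estimator, here is a minimal sketch (not the authors' released GMM-MI package): fit a Gaussian mixture to the joint samples with scikit-learn, then Monte Carlo average the log joint density minus the log marginal densities over samples drawn from the fitted model. The function name `gmm_mi_sketch` and all hyperparameter choices below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def gmm_mi_sketch(x, y, n_components=2, n_mc=20000, seed=0):
    """Illustrative Monte Carlo MI estimate (in nats) from a GMM fit
    to paired 1-D samples (x, y). Not the published GMM-MI algorithm,
    just the basic density-based construction it builds on."""
    data = np.column_stack([x, y])
    gmm = GaussianMixture(n_components=n_components, random_state=seed).fit(data)
    # Draw samples from the fitted joint density for the MC average.
    s, _ = gmm.sample(n_mc)
    # Log joint density under the fitted mixture.
    log_joint = gmm.score_samples(s)
    # Each marginal of a GMM is itself a 1-D GMM: same weights,
    # with the corresponding sub-blocks of the means and covariances.
    def log_marginal(vals, dim):
        dens = np.zeros_like(vals)
        for w, mu, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
            dens += w * norm.pdf(vals, mu[dim], np.sqrt(cov[dim, dim]))
        return np.log(dens)
    # MI = E[ log p(x,y) - log p(x) - log p(y) ] under the fitted model.
    return np.mean(log_joint - log_marginal(s[:, 0], 0) - log_marginal(s[:, 1], 1))
```

For a bivariate Gaussian with correlation rho, the ground truth is MI = -0.5 log(1 - rho^2), which is the kind of analytic check the abstract's toy-data validation refers to. The full GMM-MI algorithm additionally handles hyperparameter selection and bootstraps the finite-sample uncertainty, which this sketch omits.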
