Paper Title

Out-of-distribution Generalization via Partial Feature Decorrelation

Authors

Xin Guo, Zhengxu Yu, Chao Xiang, Zhongming Jin, Jianqiang Huang, Deng Cai, Xiaofei He, Xian-Sheng Hua

Abstract


Most deep-learning-based image classification methods assume that all samples are generated under an independent and identically distributed (IID) setting. However, out-of-distribution (OOD) generalization is more common in practice, which means an agnostic context distribution shift between training and testing environments. To address this problem, we present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model. The feature decomposition network decomposes feature embeddings into the independent and the correlated parts such that the correlations between features will be highlighted. Then, the correlated features help learn a stable feature representation by decorrelating the highlighted correlations while optimizing the image classification model. We verify the correlation modeling ability of the feature decomposition network on a synthetic dataset. The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
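The decorrelation idea described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's actual PFDL objective: it uses a plain covariance-based penalty on a batch of feature embeddings, whereas PFDL applies decorrelation only to the correlated part highlighted by the feature decomposition network.

```python
import numpy as np

def decorrelation_penalty(features: np.ndarray) -> float:
    """Sum of squared off-diagonal covariance entries of a feature batch.

    A toy stand-in for a decorrelation term: given feature embeddings of
    shape (n_samples, n_dims), correlated feature dimensions produce large
    off-diagonal covariance and hence a large penalty, while (nearly)
    independent dimensions produce a penalty close to zero.
    """
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (len(features) - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return float((off_diag ** 2).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 1))
# Two strongly correlated feature dimensions -> large penalty.
correlated = np.hstack([x, 2.0 * x + 0.01 * rng.normal(size=(1000, 1))])
# Two independent feature dimensions -> near-zero penalty.
independent = rng.normal(size=(1000, 2))
print(decorrelation_penalty(correlated) > decorrelation_penalty(independent))  # True
```

In a training loop, such a penalty would be added (weighted) to the classification loss, pushing the model toward a stable, decorrelated feature representation.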
