Paper Title
Mitigating Dataset Bias by Using Per-sample Gradient
Paper Authors
Paper Abstract
The performance of deep neural networks is strongly influenced by the training dataset setup. In particular, when attributes having a strong correlation with the target attribute are present, the trained model can provide unintended prejudgments and show significant inference errors (i.e., the dataset bias problem). Various methods have been proposed to mitigate dataset bias, and their emphasis is on weakly correlated samples, called bias-conflicting samples. These methods are based on explicit bias labels involving human annotation or on empirical correlation metrics (e.g., training loss). However, such metrics incur human annotation costs or lack sufficient theoretical justification. In this study, we propose a debiasing algorithm, called PGD (Per-sample Gradient-based Debiasing), that comprises three steps: (1) training a model with uniform batch sampling, (2) setting the importance of each sample in proportion to the norm of its per-sample gradient, and (3) training the model with importance-batch sampling, whose probabilities are obtained in step (2). Compared with existing baselines on various synthetic and real-world datasets, the proposed method achieved state-of-the-art accuracy on the classification task. Furthermore, we provide a theoretical understanding of how PGD mitigates dataset bias.
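The three steps above can be sketched on a toy problem. The code below is a minimal illustration, not the paper's implementation: it assumes a logistic-regression model on a synthetic dataset with one true feature and one spurious (bias-aligned) feature, computes per-sample gradient norms in closed form, and resamples batches with probability proportional to those norms. All variable names (`signal`, `spurious`, `aligned`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy biased dataset: x[0] carries the true signal, x[1] is a spurious
# attribute that agrees with the label on 90% of (bias-aligned) samples.
n = 200
y = rng.integers(0, 2, n).astype(float)
signal = 2.0 * y - 1.0
aligned = rng.random(n) < 0.9                 # bias-aligned mask
spurious = np.where(aligned, signal, -signal)  # conflicting samples flip it
X = np.stack([signal + 0.5 * rng.standard_normal(n), spurious], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_train(X, y, probs=None, steps=300, lr=0.1, batch=32):
    """Train logistic regression; sample batches with the given probabilities."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        idx = rng.choice(len(y), size=batch, p=probs)
        p = sigmoid(X[idx] @ w)
        w -= lr * ((p - y[idx])[:, None] * X[idx]).mean(axis=0)
    return w

# Step (1): train with uniform batch sampling.
w = sgd_train(X, y)

# Step (2): per-sample logistic-loss gradient is (p - y) * x; importance
# of each sample is proportional to its gradient norm.
p_all = sigmoid(X @ w)
norms = np.linalg.norm((p_all - y)[:, None] * X, axis=1)
probs = norms / norms.sum()

# Step (3): retrain with importance-batch sampling using these probabilities.
w_debiased = sgd_train(X, y, probs=probs)

# Bias-conflicting samples are harder for the step-(1) model, so they
# should receive higher average sampling probability.
print(probs[~aligned].mean() > probs[aligned].mean())
```

The key design point is that the per-sample gradient norm serves as the empirical difficulty score: bias-conflicting samples, which the uniformly trained model fits poorly, get larger gradients and are therefore oversampled in step (3).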