Paper Title
TrustNet: Learning from Trusted Data Against (A)symmetric Label Noise
Paper Authors
Paper Abstract
Robustness to label noise is a critical property for weakly-supervised classifiers trained on massive datasets. In this paper, we first derive an analytical bound for any given noise pattern. Based on these insights, we design TrustNet, which first learns the pattern of noise corruption, be it symmetric or asymmetric, from a small set of trusted data. Then, TrustNet is trained via a robust loss function that weights the given labels against the labels inferred from the learned noise pattern. The weight is adjusted based on model uncertainty across training epochs. We evaluate TrustNet on synthetic label noise for CIFAR-10 and CIFAR-100, as well as on real-world data with label noise, i.e., Clothing1M. We compare against state-of-the-art methods, demonstrating the strong robustness of TrustNet under a diverse set of noise patterns.
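The sketch below illustrates how such a trust-weighted loss could be written in PyTorch. It is only a minimal interpretation of the abstract: the transition-matrix estimate `T_hat`, the forward-correction term used to score the inferred noise pattern, the uncertainty-based weight schedule, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a trust-weighted robust loss, assuming a PyTorch setup.
# T_hat, trust_weighted_loss, uncertainty_weight, and alpha are hypothetical
# names introduced here for illustration only.
import torch
import torch.nn.functional as F


def trust_weighted_loss(logits, given_labels, T_hat, alpha):
    """Blend the loss on the given (possibly noisy) labels with a loss term
    based on the learned noise pattern.

    logits:       (batch, num_classes) raw model outputs
    given_labels: (batch,) observed, possibly corrupted labels
    T_hat:        (num_classes, num_classes) estimated noise transition matrix,
                  T_hat[i, j] ~ P(observed = j | true = i), learned from trusted data
    alpha:        scalar in [0, 1]; trust placed in the given labels,
                  adjusted across epochs from model uncertainty
    """
    # Standard cross-entropy against the observed labels.
    loss_given = F.cross_entropy(logits, given_labels)

    # Forward-corrected view: push the model's clean-class probabilities
    # through the estimated transition matrix, so the learned noise pattern
    # explains part of the observed label signal.
    probs = F.softmax(logits, dim=1)
    noisy_probs = probs @ T_hat                      # (batch, num_classes)
    loss_inferred = F.nll_loss(torch.log(noisy_probs + 1e-12), given_labels)

    return alpha * loss_given + (1.0 - alpha) * loss_inferred


def uncertainty_weight(logits, epoch, num_epochs):
    """One plausible schedule (an assumption): trust the given labels less as
    the model grows more confident and training progresses."""
    with torch.no_grad():
        probs = F.softmax(logits, dim=1)
        # Mean predictive entropy, normalized to [0, 1] by log(num_classes).
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
        norm_entropy = entropy / torch.log(torch.tensor(float(probs.size(1))))
    progress = epoch / max(1, num_epochs - 1)
    return float(torch.clamp(norm_entropy, 0.0, 1.0)) * (1.0 - 0.5 * progress)
```

In a training loop, `alpha` would be recomputed each epoch (for example from the model's predictive uncertainty as above) and passed to `trust_weighted_loss` together with the fixed `T_hat` estimated from the small trusted set.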