Paper Title

Loss Function Entropy Regularization for Diverse Decision Boundaries

Paper Authors

Chong, Sue Sin

Paper Abstract

Is it possible to train several classifiers to perform meaningful crowd-sourcing and produce a better prediction label set without ground-truth annotation? This paper modifies the contrastive learning objective to automatically train a self-complementing ensemble that produces state-of-the-art predictions on the CIFAR10 and CIFAR100-20 tasks. It presents a straightforward method for modifying a single unsupervised classification pipeline so that it automatically generates an ensemble of neural networks with varied decision boundaries, which together learn a more extensive feature set for the classes. Loss Function Entropy Regularization (LFER) consists of regularization terms added to the pre-training and contrastive learning loss functions. LFER is a mechanism for modifying the entropy state of the output space of unsupervised learning, thereby diversifying the latent representations of neural network decision boundaries. An ensemble trained with LFER achieves higher prediction accuracy on samples near decision boundaries. LFER is an effective means of perturbing decision boundaries, and it has produced classifiers that beat the state of the art at the contrastive learning stage. Experiments show that LFER can produce an ensemble whose accuracy is comparable to the state of the art yet whose latent decision boundaries are varied. This allows meaningful verification of samples near decision boundaries, encouraging their correct classification. By compounding the probability that a single sample is predicted correctly across an ensemble of trained neural networks, our method improves upon a single classifier by denoising and affirming correct feature mappings.
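
The abstract does not give the exact form of the LFER terms or of the ensemble combination rule, but the two ideas it describes, an entropy term added to the training loss and compounding per-sample prediction probabilities across ensemble members, can be sketched as follows. This is a minimal, hypothetical PyTorch sketch: `lfer_term`, `total_loss`, `compound_predict`, and the weight `alpha` are illustrative names, and the batch-level entropy used here is only one plausible instantiation of an output-space entropy regularizer, not the paper's exact formulation.

```python
# Illustrative sketch only: the concrete LFER regularizer and ensemble
# rule are defined in the paper; the names and the entropy choice below
# are assumptions made for this example.
import torch
import torch.nn.functional as F

def lfer_term(logits: torch.Tensor) -> torch.Tensor:
    """Entropy of the mean predicted class distribution over a batch.

    Adding a scaled version of this term to the training loss shifts the
    entropy state of the output space, the kind of perturbation the
    abstract attributes to LFER.
    """
    probs = F.softmax(logits, dim=1)      # per-sample class probabilities
    mean_probs = probs.mean(dim=0)        # batch-level class distribution
    return -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum()

def total_loss(base_loss: torch.Tensor, logits: torch.Tensor,
               alpha: float = 0.1) -> torch.Tensor:
    # alpha is a hypothetical regularization weight; varying its sign or
    # magnitude across ensemble members would perturb each member's
    # decision boundaries differently.
    return base_loss + alpha * lfer_term(logits)

def compound_predict(logits_list: list[torch.Tensor]) -> torch.Tensor:
    """Combine ensemble members by multiplying their per-class
    probabilities (summed log-probabilities); the argmax is unaffected
    by renormalization. One way to read the abstract's 'compounding the
    probability of correct prediction'."""
    log_probs = torch.stack([F.log_softmax(l, dim=1) for l in logits_list])
    combined = log_probs.sum(dim=0)       # product of probabilities, in log space
    return combined.argmax(dim=1)         # predicted label per sample
```

Under this reading, training each ensemble member with a different `alpha` would realize the varied decision boundaries the abstract describes, while `compound_predict` is the crowd-sourcing step that denoises individual members' mistakes near those boundaries.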
