Paper Title

ZLPR: A Novel Loss for Multi-label Classification

Paper Authors

Su, Jianlin; Zhu, Mingren; Murtadha, Ahmed; Pan, Shengfeng; Wen, Bo; Liu, Yunfeng

Paper Abstract

In the era of deep learning, loss functions determine the range of tasks available to models and algorithms. To support the application of deep learning in multi-label classification (MLC) tasks, we propose the ZLPR (zero-bounded log-sum-exp & pairwise rank-based) loss in this paper. Compared to other rank-based losses for MLC, ZLPR can handle problems where the number of target labels is uncertain, which, from this point of view, makes it as capable as the other two strategies often used in MLC, namely binary relevance (BR) and label powerset (LP). Additionally, ZLPR takes the correlation between labels into consideration, which makes it more comprehensive than BR methods. In terms of computational complexity, ZLPR can compete with BR methods because its prediction is also label-independent, which makes it require less time and memory than LP methods. Our experiments demonstrate the effectiveness of ZLPR on multiple benchmark datasets and multiple evaluation metrics. Moreover, we propose the soft version of ZLPR and the corresponding KL-divergence calculation method, which makes it possible to apply regularization tricks such as label smoothing to enhance the generalization of models.
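The abstract names the loss but does not spell it out. For concreteness, below is a minimal PyTorch sketch under the assumption that ZLPR takes the commonly cited form log(1 + Σ_{i∈neg} e^{s_i}) + log(1 + Σ_{j∈pos} e^{-s_j}), where the built-in zero score anchors the prediction threshold; the function and tensor names are ours, not the paper's.

```python
import torch

# A minimal sketch of the ZLPR loss described in the abstract. The exact
# formulation here is an assumption based on the loss's name ("zero-bounded
# log-sum-exp & pairwise rank-based"); function and variable names are ours.
def zlpr_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """loss = log(1 + sum_{i in neg} e^{s_i}) + log(1 + sum_{j in pos} e^{-s_j})

    logits:  (batch, num_labels) raw scores s_i from the model
    targets: (batch, num_labels) multi-hot {0, 1} label indicators
    """
    neg_inf = torch.full_like(logits, float("-inf"))
    # Positive labels contribute -s_j, negative labels contribute s_i;
    # the other set is masked out with -inf so its exp(.) terms vanish.
    pos_scores = torch.where(targets.bool(), -logits, neg_inf)
    neg_scores = torch.where(targets.bool(), neg_inf, logits)
    # Appending a constant 0 inside each log-sum-exp realizes the "1 +" term,
    # which anchors the decision threshold at zero ("zero-bounded"): labels
    # whose logits exceed 0 are predicted positive, so the number of predicted
    # labels need not be fixed in advance.
    zero = torch.zeros_like(logits[..., :1])
    pos_term = torch.logsumexp(torch.cat([zero, pos_scores], dim=-1), dim=-1)
    neg_term = torch.logsumexp(torch.cat([zero, neg_scores], dim=-1), dim=-1)
    return (pos_term + neg_term).mean()

# Toy usage: 4 samples, 10 candidate labels.
logits = torch.randn(4, 10)
targets = (torch.rand(4, 10) > 0.7).float()
print(zlpr_loss(logits, targets))
```

Note how this sketch matches the abstract's complexity claim: at inference each label is kept or dropped by comparing its own logit against 0, so prediction is label-independent as in BR methods, while the pairwise ranking between positive and negative scores happens only inside the training loss.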
