Paper Title
Label Distribution Learning via Implicit Distribution Representation
Paper Authors
Paper Abstract
In contrast to multi-label learning, label distribution learning characterizes the polysemy of an example with a label distribution, representing richer semantics. In label distribution learning, training data are obtained mainly through manual annotation or label enhancement algorithms that generate label distributions. Unfortunately, the complexity of the manual annotation task and the inaccuracy of label enhancement algorithms introduce noise and uncertainty into the label distribution training set. To alleviate this problem, we introduce an implicit distribution into the label distribution learning framework to characterize the uncertainty of each label value. Specifically, we use deep implicit representation learning to construct a label distribution matrix with Gaussian prior constraints, where each row component corresponds to the distribution estimate of one label value and is constrained by a Gaussian prior to mitigate the noise and uncertainty in the label distribution dataset. Finally, each row component of the label distribution matrix is transformed into a standard label distribution using a self-attention algorithm. In addition, several approaches with regularization characteristics are applied during the training phase to improve the performance of the model.
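The pipeline sketched in the abstract can be illustrated with a toy numpy example. This is a hedged sketch, not the paper's implementation: the per-label Gaussian parameters (`means`, `stds`), the number of implicit samples, and the mean-pooling step are all illustrative assumptions; in the paper these quantities would be predicted by a learned encoder. Each row of the matrix `Z` holds implicit samples for one label value, a single-head self-attention pass mixes the rows, and a softmax converts the pooled scores into a standard label distribution (non-negative, summing to one).

```python
import numpy as np

rng = np.random.default_rng(0)

num_labels = 4    # size of the label distribution (hypothetical)
num_samples = 16  # implicit samples per label value (hypothetical)

# Hypothetical per-label Gaussian parameters; in the paper these would
# come from deep implicit representation learning under a Gaussian prior.
means = np.array([0.1, 0.5, 0.3, 0.1])
stds = np.array([0.05, 0.10, 0.08, 0.05])

# Label distribution matrix: row i samples label i's implicit Gaussian,
# characterizing the uncertainty of that label value.
Z = means[:, None] + stds[:, None] * rng.standard_normal((num_labels, num_samples))

# Single-head self-attention over the row components (Q = K = V = Z):
# scaled dot-product scores, row-wise softmax, then attended rows.
A = Z @ Z.T / np.sqrt(num_samples)
W = np.exp(A - A.max(axis=1, keepdims=True))
W = W / W.sum(axis=1, keepdims=True)
H = W @ Z

# Pool each attended row to a scalar score, then softmax into a
# standard label distribution form.
scores = H.mean(axis=1)
p = np.exp(scores - scores.max())
label_distribution = p / p.sum()

print(label_distribution)
```

The final softmax guarantees the output satisfies the usual label distribution constraints regardless of the noise in the sampled rows, which is the role the self-attention transformation plays in the framework.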