Paper Title

Margin-Based Transfer Bounds for Meta Learning with Deep Feature Embedding

Paper Authors

Jiechao Guan, Zhiwu Lu, Tao Xiang, Timothy Hospedales

Paper Abstract

By transferring knowledge learned from seen/previous tasks, meta learning aims to generalize well to unseen/future tasks. Existing meta-learning approaches have shown promising empirical performance on various multiclass classification problems, but few provide theoretical analysis on the classifiers' generalization ability on future tasks. In this paper, under the assumption that all classification tasks are sampled from the same meta-distribution, we leverage margin theory and statistical learning theory to establish three margin-based transfer bounds for meta-learning based multiclass classification (MLMC). These bounds reveal that the expected error of a given classification algorithm for a future task can be estimated with the average empirical error on a finite number of previous tasks, uniformly over a class of preprocessing feature maps/deep neural networks (i.e. deep feature embeddings). To validate these bounds, instead of the commonly-used cross-entropy loss, a multi-margin loss is employed to train a number of representative MLMC models. Experiments on three benchmarks show that these margin-based models still achieve competitive performance, validating the practical value of our margin-based theoretical analysis.
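The central claim above has a standard shape in learning-to-learn theory. As a hedged sketch (not the paper's exact theorem; the symbols below are illustrative): with probability at least 1 - δ, and uniformly over a class F of deep feature embeddings, the expected error of a classification algorithm A on a new task T drawn from the meta-distribution τ is controlled by its average empirical margin-γ error over the n previously seen tasks, plus complexity and confidence terms:

```latex
% Schematic form only; the constant/complexity term C depends on the
% margin gamma, the embedding class F, and the per-task sample size m.
\[
\mathbb{E}_{T \sim \tau}\big[\operatorname{err}(A, T)\big]
\;\le\;
\frac{1}{n}\sum_{i=1}^{n} \widehat{\operatorname{err}}_{\gamma}(A, T_i)
\;+\; C(\mathcal{F}, \gamma, m)
\;+\; O\!\Big(\sqrt{\log(1/\delta)/n}\Big)
\]
```

On the experimental side, the abstract contrasts the multi-margin loss with the commonly used cross-entropy loss. The PyTorch sketch below illustrates that substitution; the network, data, and hyperparameters are placeholders rather than the paper's models, and only the loss swap (`nn.MultiMarginLoss` in place of `nn.CrossEntropyLoss`) reflects the described setup.

```python
import torch
import torch.nn as nn

# Illustrative sketch: train a classifier head on top of a deep feature
# embedding with a multi-margin (hinge-style) loss instead of cross-entropy.
# Shapes, layers, and data are hypothetical stand-ins.

embed_dim, num_classes, batch = 64, 5, 32

# Deep feature embedding (stand-in for any pretrained backbone).
embedding = nn.Sequential(nn.Linear(128, embed_dim), nn.ReLU())
classifier = nn.Linear(embed_dim, num_classes)

# Multi-class margin loss, used here in place of nn.CrossEntropyLoss().
criterion = nn.MultiMarginLoss(margin=1.0)

optimizer = torch.optim.SGD(
    list(embedding.parameters()) + list(classifier.parameters()), lr=0.01
)

# One training step on randomly generated data standing in for one task.
x = torch.randn(batch, 128)
y = torch.randint(0, num_classes, (batch,))

logits = classifier(embedding(x))
loss = criterion(logits, y)  # penalizes margin violations between classes

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"multi-margin loss: {loss.item():.4f}")
```

The design point of the swap: `nn.MultiMarginLoss` penalizes any class whose score comes within the margin of the true class's score, so the trained classifiers have explicit margins, which is what makes them amenable to the margin-based generalization analysis described above.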
