Paper Title

What Matters For Meta-Learning Vision Regression Tasks?

Authors

Ning Gao, Hanna Ziesche, Ngo Anh Vien, Michael Volpp, Gerhard Neumann

Abstract

Meta-learning is widely used in few-shot classification and function regression due to its ability to quickly adapt to unseen tasks. However, it has not yet been well explored on regression tasks with high-dimensional inputs such as images. This paper makes two main contributions that help to understand this barely explored area. \emph{First}, we design two new types of cross-category level vision regression tasks, namely object discovery and pose estimation, of unprecedented complexity in the meta-learning domain for computer vision. To this end, we (i) exhaustively evaluate common meta-learning techniques on these tasks, and (ii) quantitatively analyze the effect of various deep learning techniques commonly used in recent meta-learning algorithms to strengthen generalization: data augmentation, domain randomization, task augmentation and meta-regularization. Finally, we (iii) provide some insights and practical recommendations for training meta-learning algorithms on vision regression tasks. \emph{Second}, we propose adding functional contrastive learning (FCL) over the task representations in Conditional Neural Processes (CNPs) and training the model in an end-to-end fashion. The experimental results show that the conclusions of prior work are misleading as a consequence of a poor choice of the loss function as well as meta-training sets that are too small. Specifically, we find that CNPs outperform MAML on most tasks without fine-tuning. Furthermore, we observe that naive task augmentation without a tailored design results in underfitting.
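The paper's exact FCL objective is not reproduced on this page, so the following is only a minimal sketch of what contrastive learning over CNP task representations could look like, assuming an NT-Xent-style loss in PyTorch. The function name `functional_contrastive_loss`, the two-view setup (`r1`, `r2` as task embeddings computed from two different context subsets of the same batch of tasks), and the weighting factor `lambda_fcl` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def functional_contrastive_loss(r1: torch.Tensor,
                                r2: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent-style contrastive loss over per-task representations.

    r1, r2: (B, D) task embeddings produced by the CNP encoder from two
    different context-set views of the same B tasks. Matching rows are
    treated as positive pairs; all other rows in the batch are negatives.
    """
    z = F.normalize(torch.cat([r1, r2], dim=0), dim=1)  # (2B, D), unit norm
    logits = z @ z.t() / temperature                    # pairwise cosine similarities
    logits.fill_diagonal_(float("-inf"))                # exclude self-similarity
    b = r1.size(0)
    # the positive for row i is row i + B (and vice versa)
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)]).to(z.device)
    return F.cross_entropy(logits, targets)

# Hypothetical end-to-end objective: CNP negative log-likelihood plus the
# contrastive term, weighted by an assumed hyperparameter lambda_fcl.
# loss = nll_loss + lambda_fcl * functional_contrastive_loss(r1, r2)
```

In this sketch, the contrastive term encourages embeddings of the same task to agree across context views while pushing apart embeddings of different tasks, which matches the abstract's description of FCL acting on task representations and being trained jointly with the CNP.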
