Paper Title
Learning from Few Samples: A Survey
Paper Authors
Paper Abstract
Deep neural networks have been able to outperform humans in some cases, such as image recognition and image classification. However, with the emergence of various novel categories, the ability to continuously widen the learning capability of such networks from limited samples still remains a challenge. Techniques such as meta-learning and few-shot learning have shown promising results, as they can learn or generalize to a novel category/task based on prior knowledge. In this paper, we perform a study of the existing few-shot meta-learning techniques in the computer vision domain based on their method and evaluation metrics. We provide a taxonomy for these techniques and categorize them as data-augmentation, embedding, optimization, and semantics based learning for few-shot, one-shot, and zero-shot settings. We then describe the seminal work done in each category and discuss their approach towards solving the predicament of learning from few samples. Lastly, we provide a comparison of these techniques on the commonly used benchmark datasets Omniglot and MiniImagenet, along with a discussion of future directions for improving the performance of these techniques towards the ultimate goal of outperforming humans.