Paper Title

LID 2020: The Learning from Imperfect Data Challenge Results

Paper Authors

Yunchao Wei, Shuai Zheng, Ming-Ming Cheng, Hang Zhao, Liwei Wang, Errui Ding, Yi Yang, Antonio Torralba, Ting Liu, Guolei Sun, Wenguan Wang, Luc Van Gool, Wonho Bae, Junhyug Noh, Jinhwan Seo, Gunhee Kim, Hao Zhao, Ming Lu, Anbang Yao, Yiwen Guo, Yurong Chen, Li Zhang, Chuangchuang Tan, Tao Ruan, Guanghua Gu, Shikui Wei, Yao Zhao, Mariia Dobko, Ostap Viniavskyi, Oles Dobosevych, Zhendong Wang, Zhenyuan Chen, Chen Gong, Huanqing Yan, Jun He

Paper Abstract

Learning from imperfect data has become an issue in many industrial applications now that the research community has made profound progress in supervised learning from perfectly annotated datasets. The purpose of the Learning from Imperfect Data (LID) workshop is to inspire and facilitate research into novel approaches that harness imperfect data and improve data efficiency during training. A massive amount of user-generated data is nowadays available on multiple internet services; how to leverage those data to improve machine learning models is a high-impact problem. We organized the challenges in conjunction with the workshop. The goal of these challenges is to find the state-of-the-art approaches in the weakly supervised learning setting for object detection, semantic segmentation, and scene parsing. There are three tracks in the challenge, i.e., weakly supervised semantic segmentation (Track 1), weakly supervised scene parsing (Track 2), and weakly supervised object localization (Track 3). In Track 1, based on ILSVRC DET, we provide pixel-level annotations of 15K images from 200 categories for evaluation. In Track 2, we provide point-based annotations for the training set of ADE20K. In Track 3, based on ILSVRC CLS-LOC, we provide pixel-level annotations of 44,271 images for evaluation. In addition, we introduce a new evaluation metric proposed by \cite{zhang2020rethinking}, i.e., the IoU curve, to measure the quality of the generated object localization maps. This technical report summarizes the highlights from the challenge. The challenge submission server and the leaderboard will remain open to researchers who are interested. More details regarding the challenge and the benchmarks are available at https://lidchallenge.github.io
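The abstract mentions the IoU-curve metric from \cite{zhang2020rethinking} for assessing localization maps. As a minimal sketch of one plausible formulation — binarizing the map at a sweep of thresholds and measuring IoU against the ground-truth mask at each step — the following might look like this; the function name and threshold grid are illustrative assumptions, not taken from the official challenge toolkit:

```python
import numpy as np

def iou_curve(loc_map, gt_mask, thresholds=None):
    """Sketch of an IoU-vs-threshold curve for a localization map.

    loc_map: float array with scores in [0, 1].
    gt_mask: boolean array of the same shape (ground-truth object mask).
    Returns (thresholds, ious); plotting ious over thresholds gives the curve.
    NOTE: hypothetical helper, not the official LID evaluation code.
    """
    if thresholds is None:
        thresholds = np.linspace(0.0, 0.9, 10)  # illustrative grid
    ious = []
    for t in thresholds:
        pred = loc_map >= t                       # binarize at this threshold
        inter = np.logical_and(pred, gt_mask).sum()
        union = np.logical_or(pred, gt_mask).sum()
        ious.append(inter / union if union > 0 else 0.0)
    return np.asarray(thresholds), np.asarray(ious)
```

A curve that stays high across a wide threshold range indicates a localization map that separates object from background robustly, rather than only at one hand-tuned cutoff — which is the motivation the metric's authors give for preferring it over a single-threshold score.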
