Paper Title

Learning Visual Context by Comparison

Authors

Minchul Kim, Jongchan Park, Seil Na, Chang Min Park, Donggeun Yoo

Abstract

Finding diseases in an X-ray image is an important yet highly challenging task. Current methods for this task exploit various characteristics of the chest X-ray image, but one of the most important characteristics is still missing: the comparison between related regions in an image. In this paper, we present the Attend-and-Compare Module (ACM) for capturing the difference between an object of interest and its corresponding context. We show that explicit difference modeling can be very helpful in tasks that require direct comparison between distant locations. The module can be plugged into existing deep learning models. For evaluation, we apply our module to three chest X-ray recognition tasks and the COCO object detection and segmentation tasks, and observe consistent improvements across tasks. The code is available at https://github.com/mk-minchul/attend-and-compare.
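The core idea of explicit difference modeling can be illustrated with a toy sketch. The code below is a hypothetical, simplified illustration, not the paper's actual ACM implementation: it pools two attention-weighted summaries of a feature map (an "object" region and a "context" region, each selected by a query vector) and feeds their difference back into the features as a residual signal. All function and variable names here are our own for illustration.

```python
import numpy as np

def attend_and_compare(features, query_obj, query_ctx):
    """Toy sketch of explicit difference modeling (hypothetical,
    not the paper's exact ACM).

    features:  (C, N) feature map flattened over N spatial positions.
    query_obj: (C,) query vector attending to the object of interest.
    query_ctx: (C,) query vector attending to the comparison context.
    """
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Attention weights over spatial positions for each query.
    attn_obj = softmax(query_obj @ features)   # (N,)
    attn_ctx = softmax(query_ctx @ features)   # (N,)

    # Attention-pooled summaries of the two regions.
    summary_obj = features @ attn_obj          # (C,)
    summary_ctx = features @ attn_ctx          # (C,)

    # Explicit difference between object and context,
    # broadcast back onto every spatial position.
    diff = summary_obj - summary_ctx           # (C,)
    return features + diff[:, None]            # residual modulation
```

Note that when the two queries attend to the same region the difference vanishes and the features pass through unchanged, so the module can only add information when the compared regions actually differ.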
