Paper Title
Proxy Anchor Loss for Deep Metric Learning
Paper Authors
Abstract
Existing metric learning losses can be categorized into two classes: pair-based and proxy-based losses. The former class can leverage fine-grained semantic relations between data points, but in general slows convergence due to its high training complexity. In contrast, the latter class enables fast and reliable convergence, but cannot consider the rich data-to-data relations. This paper presents a new proxy-based loss that takes advantage of both pair- and proxy-based methods and overcomes their limitations. Thanks to the use of proxies, our loss boosts the speed of convergence and is robust against noisy labels and outliers. At the same time, it allows embedding vectors of data to interact with each other through its gradients to exploit data-to-data relations. Our method is evaluated on four public benchmarks, where a standard network trained with our loss achieves state-of-the-art performance and converges most quickly.
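To make the idea concrete, below is a minimal NumPy sketch of a Proxy Anchor-style loss: each class owns a learnable proxy that serves as an anchor, positive embeddings are pulled toward their class proxy, negatives are pushed away, and the log-sum-exp over the batch lets all embeddings influence each other's gradients. The function name, the scale `alpha`, and the margin `delta` defaults are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def proxy_anchor_loss(embeddings, labels, proxies, alpha=32.0, delta=0.1):
    """Sketch of a Proxy Anchor-style loss.

    embeddings: (N, D) batch of embedding vectors
    labels:     (N,) integer class ids
    proxies:    (C, D) one learnable proxy per class
    """
    # Cosine similarity between every embedding and every proxy.
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    sim = e @ p.T  # (N, C)

    num_classes = proxies.shape[0]
    pos_mask = np.eye(num_classes, dtype=bool)[labels]  # (N, C)
    neg_mask = ~pos_mask

    # Pull term: embeddings of a class vs. that class's proxy (anchor).
    pos_exp = np.where(pos_mask, np.exp(-alpha * (sim - delta)), 0.0)
    # Push term: all other embeddings vs. each proxy.
    neg_exp = np.where(neg_mask, np.exp(alpha * (sim + delta)), 0.0)

    # Average the pull term only over proxies with positives in the batch;
    # average the push term over all proxies.
    with_pos = np.unique(labels)
    pos_term = np.log(1.0 + pos_exp.sum(axis=0))[with_pos].sum() / len(with_pos)
    neg_term = np.log(1.0 + neg_exp.sum(axis=0)).sum() / num_classes
    return pos_term + neg_term
```

Because the log-sum-exp aggregates over the whole batch before the logarithm, the gradient with respect to each embedding depends on the other embeddings sharing its proxy, which is how a proxy-based loss can still exploit data-to-data relations.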