Title

Task Uncertainty Loss Reduce Negative Transfer in Asymmetric Multi-task Feature Learning

Authors

Rafael Peres da Silva, Chayaporn Suphavilai, Niranjan Nagarajan

Abstract

Multi-task learning (MTL) is frequently used in settings where a target task has to be learnt from limited training data, but knowledge can be leveraged from related auxiliary tasks. While MTL can improve overall task performance relative to single-task learning (STL), these improvements can hide negative transfer (NT), where STL delivers better performance on many individual tasks. Asymmetric multi-task feature learning (AMTFL) tries to address this by allowing tasks with higher loss values to have less influence on the feature representations learnt for other tasks. However, task loss values do not necessarily indicate how reliable a model is for a specific task. We present examples of NT in two orthogonal datasets (image recognition and pharmacogenomics) and tackle this challenge by using aleatoric homoscedastic uncertainty to capture the relative confidence between tasks and to set weights for the task losses. Our results show that this approach reduces NT, providing a new way to enable robust MTL.
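The abstract's core idea of weighting each task's loss by its learned homoscedastic (task-level) aleatoric uncertainty can be sketched as below. This is not the authors' code; it is a minimal illustration assuming the standard formulation in which each task t contributes loss_t / (2·σ_t²) + (1/2)·log σ_t², with log σ_t² kept as a learnable per-task parameter during training. Tasks the model is less confident about (larger σ_t) are automatically down-weighted, which is the mechanism for reducing negative transfer.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses using homoscedastic aleatoric uncertainty.

    task_losses : per-task loss values (floats)
    log_vars    : per-task log-variances log(sigma_t^2); in a real model
                  these would be learnable parameters optimised jointly
                  with the network weights.
    Each task contributes 0.5 * exp(-log_var_t) * loss_t + 0.5 * log_var_t,
    i.e. loss_t / (2 * sigma_t^2) plus a regulariser that stops the model
    from driving every sigma_t to infinity.
    """
    total = 0.0
    for loss_t, log_var_t in zip(task_losses, log_vars):
        precision = math.exp(-log_var_t)  # 1 / sigma_t^2
        total += 0.5 * precision * loss_t + 0.5 * log_var_t
    return total

# With equal confidence (sigma_t = 1 for all tasks) this reduces to
# half the plain sum of task losses:
combined = uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0])  # 1.5

# A high-uncertainty task is down-weighted: sigma^2 = 4 scales its
# loss contribution by 1/4 (plus the log-variance penalty).
noisy = uncertainty_weighted_loss([2.0], [math.log(4.0)])
```

In practice the `log_vars` are trained by gradient descent alongside the shared feature extractor, so the relative weighting between tasks is learnt rather than hand-tuned.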
