Paper Title

Adaptive Risk Minimization: Learning to Adapt to Domain Shift

Authors

Marvin Zhang, Henrik Marklund, Nikita Dhawan, Abhishek Gupta, Sergey Levine, Chelsea Finn

Abstract

A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution. However, this assumption is violated in almost all practical applications: machine learning systems are regularly tested under distribution shift, due to changing temporal correlations, atypical end users, or other factors. In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts, corresponding to new domains or domain distributions. Most prior methods aim to learn a single robust model or invariant feature space that performs well on all domains. In contrast, we aim to learn models that adapt at test time to domain shift using unlabeled test points. Our primary contribution is to introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains. Compared to prior methods for robustness, invariance, and adaptation, ARM methods provide performance gains of 1-4% test accuracy on a number of image classification problems exhibiting domain shift.
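The core idea of the ARM objective is that the model is trained to adapt: on each training domain, an adaptation step first uses an unlabeled batch from that domain, and the post-adaptation loss is what gets minimized. The sketch below is a minimal, hypothetical illustration of that structure using a toy linear model where the adaptation step simply re-centers inputs with the batch's own statistics (in the spirit of the paper's batch-normalization-based ARM variant); all data, function names, and hyperparameters here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt(x):
    """Adaptation step (a stand-in for the paper's adaptation model):
    normalize inputs with statistics of the unlabeled batch itself, so the
    predictor sees domain-shift-corrected features at test time."""
    return x - x.mean(axis=0)

def predict(w, x):
    # Linear predictor applied to batch-adapted features.
    return adapt(x) @ w

def arm_train(domains, steps=500, lr=0.1):
    """Meta-train on batches drawn one training domain at a time, so the
    parameters are optimized for the loss *after* adaptation -- the
    structure of the ARM objective."""
    w = np.zeros(domains[0][0].shape[1])
    n = len(domains)
    for t in range(steps):
        x, y = domains[t % n]            # one unlabeled-then-labeled batch per domain
        err = predict(w, x) - y          # post-adaptation squared-error residual
        grad = adapt(x).T @ err / len(y) # gradient of the post-adaptation loss
        w -= lr * grad
    return w

# Toy data: each domain shifts the inputs by a different offset (the
# domain-specific nuisance), while the labeling function is shared.
w_true = np.array([1.0, -2.0])

def make_domain(offset, n=64):
    x = rng.normal(size=(n, 2))
    x -= x.mean(axis=0)                  # labels depend on centered features
    y = x @ w_true
    return x + offset, y                 # observed inputs carry the domain shift

train_domains = [make_domain(off) for off in (-3.0, 0.0, 5.0)]
w = arm_train(train_domains)

# Test-time shift: an offset never seen in training. Because adaptation
# uses the test batch's own statistics, the shift is removed and the
# learned predictor transfers.
x_test, y_test = make_domain(10.0)
mse = np.mean((predict(w, x_test) - y_test) ** 2)
```

The point of the toy setup is that a non-adaptive model fit on raw inputs would be thrown off by the unseen offset, whereas the jointly trained adapt-then-predict pipeline handles it using only the unlabeled test batch.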
