Paper Title
Learning to Learn with Variational Information Bottleneck for Domain Generalization
Paper Authors
Abstract
Domain generalization models learn to generalize to previously unseen domains, but suffer from prediction uncertainty and domain shift. In this paper, we address both problems. We introduce a probabilistic meta-learning model for domain generalization in which the classifier parameters shared across domains are modeled as distributions. This enables better handling of prediction uncertainty on unseen domains. To deal with domain shift, we learn domain-invariant representations by a proposed principle of meta variational information bottleneck, which we call MetaVIB. MetaVIB is derived from novel variational bounds on mutual information, obtained by leveraging the meta-learning setting of domain generalization. Through episodic training, MetaVIB learns to gradually narrow the domain gaps to establish domain-invariant representations, while simultaneously maximizing prediction accuracy. We conduct experiments on three benchmarks for cross-domain visual recognition. Comprehensive ablation studies validate the benefits of MetaVIB for domain generalization, and the comparison results demonstrate that our method consistently outperforms previous approaches.
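The abstract builds on the variational information bottleneck, which trades off a classification loss against a KL penalty that compresses the latent representation. Below is a minimal numpy sketch of that generic VIB objective only — it is not the paper's MetaVIB, and the function names (`kl_standard_normal`, `sample_z`, `vib_loss`) and the choice of a standard-normal prior with a diagonal-Gaussian encoder are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def kl_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    # summed over the latent dimensions of each sample.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def sample_z(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    # so gradients can flow through the stochastic encoder.
    return mu + np.exp(0.5 * log_var) * rng.standard_normal(mu.shape)

def vib_loss(logits, labels, mu, log_var, beta=1e-3):
    # VIB objective: cross-entropy of the classifier on the sampled code,
    # plus a beta-weighted KL term that bottlenecks information about x.
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=-1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels]
    return np.mean(ce + beta * kl_standard_normal(mu, log_var))

# Tiny demo with a 4-dim latent code and 3 classes.
rng = np.random.default_rng(0)
mu, log_var = np.zeros((2, 4)), np.zeros((2, 4))
z = sample_z(mu, log_var, rng)              # stochastic codes for 2 samples
logits = np.zeros((2, 3))                   # a stand-in classifier output
loss = vib_loss(logits, np.array([0, 1]), mu, log_var)
```

With zero means and unit variances the KL term vanishes, so the loss reduces to the cross-entropy of uniform logits, log 3. In the paper's meta-learning setting this kind of objective would be optimized episodically across source domains rather than on a single dataset.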