Paper Title
An Empirical Study on Multi-Domain Robust Semantic Segmentation
Paper Authors

Paper Abstract
How to effectively leverage the plentiful existing datasets to train a robust, high-performance model is of great significance for many practical applications. However, a model trained on a naive merge of different datasets tends to obtain poor performance due to annotation conflicts and domain divergence. In this paper, we attempt to train a unified model that is expected to perform well across domains on several popular segmentation datasets. We conduct a detailed analysis of the impact on model generalization from three aspects: data augmentation, training strategies, and model capacity. Based on this analysis, we propose a robust solution that is able to improve model generalization across domains. Our solution ranks 2nd on the RVC 2022 semantic segmentation task, using a dataset only 1/3 the size of that used by the 1st-place model.