Paper Title
Expeditious Saliency-guided Mix-up through Random Gradient Thresholding
Paper Authors
Paper Abstract
Mix-up training approaches have proven effective in improving the generalization ability of Deep Neural Networks. Over the years, the research community has expanded mix-up methods in two directions, devoting extensive effort to improving saliency-guided procedures while paying minimal attention to the random path, leaving the randomization domain largely unexplored. In this paper, inspired by the complementary strengths of the two directions, we introduce a novel method that lies at the junction of the two routes. By combining the best elements of randomness and saliency utilization, our method balances speed, simplicity, and accuracy. We name our method R-Mix, following the concept of "Random Mix-up". We demonstrate its effectiveness in generalization, weakly supervised object localization, calibration, and robustness to adversarial attacks. Finally, to address the question of whether a better decision protocol exists, we train a Reinforcement Learning agent that decides the mix-up policy based on the classifier's performance, reducing dependency on human-designed objectives and hyperparameter tuning. Extensive experiments further show that the agent is capable of performing at a state-of-the-art level, laying the foundation for fully automatic mix-up. Our code is released at https://github.com/minhlong94/Random-Mixup.
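The abstract describes the mechanism only at a high level. The sketch below illustrates what a saliency-guided mix-up with a randomly drawn gradient threshold could look like, as the title suggests: saliency is taken from a single input-gradient backward pass, and the binary mixing mask keeps each image's most salient pixels above a randomly sampled quantile. Everything here (the function name `r_mix_batch`, the uniform keep-ratio, the per-sample quantile threshold, the area-based label weights) is an illustrative assumption, not the authors' implementation; the released code at the repository above is authoritative.

```python
# Hypothetical sketch of mix-up via random gradient thresholding (PyTorch).
# Reconstructed from the title and abstract only; not the official R-Mix code.
import torch
import torch.nn.functional as F


def r_mix_batch(model, x, y, num_classes):
    """Mix each image with a shuffled partner, keeping the pixels whose
    input-gradient saliency exceeds a randomly sampled quantile."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # Saliency from one cheap backward pass over the inputs; no extra network.
    grad = torch.autograd.grad(loss, x)[0]
    saliency = grad.abs().sum(dim=1, keepdim=True)  # (B, 1, H, W)

    # Randomness: the keep-ratio (threshold quantile) is drawn uniformly, so
    # masks range from "mostly partner image" to "mostly original image".
    keep_ratio = torch.rand(1).item()
    flat = saliency.flatten(1)                               # (B, H*W)
    thresh = torch.quantile(flat, 1.0 - keep_ratio, dim=1)   # per-sample cut
    mask = (flat >= thresh.unsqueeze(1)).float().view_as(saliency)

    perm = torch.randperm(x.size(0), device=x.device)
    x_det = x.detach()
    x_mix = mask * x_det + (1 - mask) * x_det[perm]
    # Label weights follow the pixel fraction actually kept from each source.
    lam = mask.flatten(1).mean(dim=1)                        # (B,)
    y_onehot = F.one_hot(y, num_classes).float()
    y_mix = lam.unsqueeze(1) * y_onehot + (1 - lam).unsqueeze(1) * y_onehot[perm]
    return x_mix, y_mix
```

Under these assumptions, the method stays close to random mix-up in cost (one extra backward pass, no saliency network or optimization loop) while still biasing the mask toward discriminative regions, which is the speed/accuracy balance the abstract claims.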