Paper Title

Why Mixup Improves the Model Performance

Authors

Kimura, Masanari

Abstract

Machine learning techniques are used in a wide range of domains. However, machine learning models often suffer from overfitting. Many data augmentation methods have been proposed to tackle this problem, one of which is called mixup. Mixup is a recently proposed regularization procedure that linearly interpolates a random pair of training examples. This regularization method works very well experimentally, but its theoretical guarantees have not been adequately discussed. In this study, we aim to discover why mixup works well from the perspective of statistical learning theory.
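The abstract describes mixup as linearly interpolating a random pair of training examples. A minimal sketch of that procedure (the `alpha` default and the batch-permutation pairing are common choices from the original mixup formulation, not details taken from this paper) might look like:

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, seed=None):
    """Mixup augmentation: convexly combine each example with a random partner.

    x : (n, d) array of features
    y : (n, k) array of one-hot labels
    alpha : Beta-distribution parameter controlling interpolation strength
            (a hypothetical default; the paper itself does not fix a value)
    """
    rng = np.random.default_rng(seed)
    # Interpolation weight lambda ~ Beta(alpha, alpha), shared across the batch.
    lam = rng.beta(alpha, alpha)
    # Pair each example with a randomly permuted partner from the same batch.
    perm = rng.permutation(x.shape[0])
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

Because both features and labels are mixed with the same weight, the resulting targets remain valid probability vectors, which is what lets mixup act as a regularizer on the decision boundary.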
