Paper Title

How much progress have we made in neural network training? A New Evaluation Protocol for Benchmarking Optimizers

Paper Authors

Yuanhao Xiong, Xuanqing Liu, Li-Cheng Lan, Yang You, Si Si, Cho-Jui Hsieh

Paper Abstract

Many optimizers have been proposed for training deep neural networks, and they often have multiple hyperparameters, which make it tricky to benchmark their performance. In this work, we propose a new benchmarking protocol to evaluate both end-to-end efficiency (training a model from scratch without knowing the best hyperparameter) and data-addition training efficiency (the previously selected hyperparameters are used for periodically re-training the model with newly collected data). For end-to-end efficiency, unlike previous work that assumes random hyperparameter tuning, which over-emphasizes the tuning time, we propose to evaluate with a bandit hyperparameter tuning strategy. A human study is conducted to show that our evaluation protocol matches human tuning behavior better than the random search. For data-addition training, we propose a new protocol for assessing the hyperparameter sensitivity to data shift. We then apply the proposed benchmarking framework to 7 optimizers and various tasks, including computer vision, natural language processing, reinforcement learning, and graph mining. Our results show that there is no clear winner across all the tasks.
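
The two protocols in the abstract are concrete enough to sketch. For end-to-end efficiency, the evaluation replaces random search with a bandit hyperparameter tuning strategy; the abstract does not spell out the exact algorithm, so the following is a minimal successive-halving sketch in that spirit, not the paper's implementation. All names here (`successive_halving`, `sample_config`, `train_and_eval`) are hypothetical placeholders:

```python
import random

def successive_halving(sample_config, train_and_eval,
                       n_configs=27, min_budget=1, eta=3):
    """Minimal successive-halving sketch of a bandit-style tuner.

    sample_config: () -> dict, draws one hyperparameter setting.
    train_and_eval: (config, budget) -> float, validation loss after
        training for `budget` epochs (lower is better).
    """
    configs = [sample_config() for _ in range(n_configs)]
    budget = min_budget
    while len(configs) > 1:
        # Evaluate every surviving configuration at the current budget.
        # (Each round retrains from scratch here; a real implementation
        # would checkpoint and resume training.)
        ranked = sorted(configs, key=lambda c: train_and_eval(c, budget))
        # Keep the best 1/eta fraction, then grow the budget by eta.
        configs = ranked[:max(1, len(configs) // eta)]
        budget *= eta
    return configs[0]

# Toy usage with a synthetic objective whose optimum is lr = 1e-2.
def sample_config():
    return {"lr": 10 ** random.uniform(-4, -1)}

def train_and_eval(config, budget):
    return abs(config["lr"] - 1e-2) / budget

print(successive_halving(sample_config, train_and_eval))
```

Because a bandit tuner cuts off clearly bad configurations early, it spends far less budget on them than random search does, which is why the authors argue it penalizes hard-to-tune optimizers less and matches human tuning behavior better.

For data-addition training, the hyperparameters selected on the initial data are frozen and the model is retrained as new data arrives; a sketch of that loop, under the same hypothetical-placeholder assumptions:

```python
def data_addition_training(train_on, fixed_config, chunks):
    """Retrain with fixed hyperparameters as the dataset grows.

    train_on: (config, dataset) -> float, test performance after
        training from scratch on `dataset` (hypothetical placeholder).
    chunks: iterable of newly collected data batches.
    """
    dataset, scores = [], []
    for chunk in chunks:
        dataset.extend(chunk)            # dataset grows over time
        scores.append(train_on(fixed_config, dataset))
    return scores  # how sensitive the fixed config is to data shift
```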
