Paper Title
Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems
Paper Authors
Paper Abstract
Stochastic gradient descent is a canonical tool for addressing stochastic optimization problems, and forms the bedrock of modern machine learning and statistics. In this work, we seek to balance the fact that an attenuating step-size is required for exact asymptotic convergence against the fact that a constant step-size learns faster in finite time, up to an error. To do so, rather than fixing the mini-batch size and the step-size at the outset, we propose a strategy that allows these parameters to evolve adaptively. Specifically, the batch-size is set to a piecewise-constant increasing sequence, where the increase occurs when a suitable error criterion is satisfied. Moreover, the step-size is selected as the one that yields the fastest convergence. The overall algorithm, the two-scale adaptive (TSA) scheme, is developed for both convex and non-convex stochastic optimization problems. It inherits the exact asymptotic convergence of the stochastic gradient method. More importantly, it theoretically achieves the optimal error-decay rate, together with an overall reduction in computational cost. Experimentally, we observe that TSA attains a favorable tradeoff relative to standard SGD, which fixes the mini-batch size and the step-size, or to variants that only allow one of them to increase or decrease, respectively.
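To make the adaptive batch-size idea concrete, below is a minimal sketch of SGD with a piecewise-constant, increasing batch size. It is not the paper's TSA algorithm: the function `adaptive_batch_sgd`, the gradient-norm test used as the error criterion, the doubling rule for the batch size, and the constant per-stage step size are all illustrative stand-ins chosen here for brevity, not the criteria derived in the paper.

```python
import numpy as np

def adaptive_batch_sgd(grad_fn, x0, n_samples, max_iters=2000,
                       init_batch=8, growth=2, lr=0.1, tol_decay=0.5):
    """Sketch: SGD whose batch size is a piecewise-constant increasing sequence.

    grad_fn(x, idx) returns a stochastic gradient estimated on the mini-batch
    of sample indices `idx`. The batch size grows by `growth` whenever a simple
    error proxy (the mini-batch gradient norm) falls below a stage-wise
    threshold; the step size is held constant within each stage. These rules
    are simplified placeholders for the TSA criteria.
    """
    x = np.array(x0, dtype=float)
    batch = init_batch
    rng = np.random.default_rng(0)
    # Initialize the stage threshold from the gradient norm at the starting point.
    threshold = np.linalg.norm(grad_fn(x, np.arange(min(batch, n_samples))))
    for _ in range(max_iters):
        idx = rng.choice(n_samples, size=min(batch, n_samples), replace=False)
        g = grad_fn(x, idx)
        x -= lr * g
        # Error criterion (proxy): once the stochastic gradient is small enough,
        # move to the next stage with a larger batch and a tighter threshold.
        if np.linalg.norm(g) <= threshold:
            batch = min(growth * batch, n_samples)
            threshold *= tol_decay
    return x

# Usage example (hypothetical data): least-squares regression, where the
# mini-batch gradient of 0.5*||A x - b||^2 / m is computed on index set `idx`.
rng = np.random.default_rng(1)
A, x_true = rng.normal(size=(500, 10)), rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=500)
grad = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
x_hat = adaptive_batch_sgd(grad, np.zeros(10), n_samples=500)
```

The design intent mirrors the abstract: small batches (large variance, cheap iterations) early on, then progressively larger batches once the iterate is accurate enough at the current noise level, so that exact asymptotic convergence is retained without paying the full-batch cost from the start. In the paper, both the batch-growth criterion and the step-size selection come from the convergence analysis rather than the heuristics used in this sketch.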