Paper Title
TRBoost: A Generic Gradient Boosting Machine based on Trust-region Method
Paper Authors
Paper Abstract
Gradient Boosting Machines (GBMs) have demonstrated remarkable success in solving diverse problems by utilizing Taylor expansions in functional space. However, achieving a balance between performance and generality has posed a challenge for GBMs. In particular, GBMs based on gradient descent employ the first-order Taylor expansion to remain applicable to all loss functions, while GBMs based on Newton's method use positive-definite Hessian information to achieve superior performance at the expense of generality. To address this issue, this study proposes a new generic Gradient Boosting Machine called Trust-region Boosting (TRBoost). In each iteration, TRBoost approximates the objective with a constrained quadratic model and applies a trust-region algorithm to solve it and obtain a new learner. Unlike GBMs based on Newton's method, TRBoost does not require the Hessian to be positive definite, so it can be applied to arbitrary loss functions while still maintaining performance competitive with second-order algorithms. The convergence analysis and numerical experiments conducted in this study confirm that TRBoost is as general as first-order GBMs and yields results competitive with second-order GBMs. Overall, TRBoost is a promising approach that balances performance and generality, making it a valuable addition to the toolkit of machine learning practitioners.
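To make the idea in the abstract concrete, below is a minimal Python sketch of a per-update trust-region step: minimize the constrained quadratic model g*t + 0.5*h*t^2 over |t| <= delta, then adjust the radius from the acceptance ratio. This is an illustrative sketch, not the paper's actual formulation; the function names (solve_tr_subproblem, tr_ratio), the squared-error usage example, and the 0.25/0.75 radius-update thresholds are all assumptions made for exposition.

```python
import numpy as np

def solve_tr_subproblem(g, h, delta):
    """Minimize the 1-D quadratic model m(t) = g*t + 0.5*h*t**2
    subject to |t| <= delta. Unlike a pure Newton step (-g/h),
    this is well defined even when the curvature h is <= 0."""
    if h > 0:
        t = -g / h  # unconstrained Newton minimizer
        return float(np.clip(t, -delta, delta))
    # Non-positive curvature: the model decreases toward the boundary,
    # so the minimizer sits at -delta * sign(g) (either end if g == 0).
    return -delta * float(np.sign(g)) if g != 0 else delta

def tr_ratio(actual_reduction, predicted_reduction, eps=1e-12):
    """Trust-region acceptance ratio rho = actual / predicted reduction."""
    return actual_reduction / max(predicted_reduction, eps)

# Hypothetical usage: one scalar update under squared-error loss.
y, f = 3.0, 1.0          # target and current prediction
g, h = f - y, 1.0        # gradient and Hessian of 0.5 * (f - y)**2
delta = 0.5
t = solve_tr_subproblem(g, h, delta)
pred_red = -(g * t + 0.5 * h * t**2)                  # model-predicted reduction
act_red = 0.5 * (f - y)**2 - 0.5 * (f + t - y)**2     # actual loss reduction
rho = tr_ratio(act_red, pred_red)
# Standard radius update: grow on good agreement, shrink on poor agreement.
delta = delta * 2.0 if rho > 0.75 else (delta * 0.5 if rho < 0.25 else delta)
```

Because the subproblem is bounded by the radius constraint, the step stays meaningful for any sign of h, which is how a trust-region view can recover the generality of first-order GBMs while keeping second-order information when it helps.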