Paper Title

No More Pesky Hyperparameters: Offline Hyperparameter Tuning for RL

Paper Authors

Han Wang, Archit Sakhadeo, Adam White, James Bell, Vincent Liu, Xutong Zhao, Puer Liu, Tadashi Kozuno, Alona Fyshe, Martha White

Paper Abstract

The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous, or time consuming. We propose a new approach to tune hyperparameters from offline logs of data, to fully specify the hyperparameters for an RL agent that learns online in the real world. The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model to identify promising hyperparameters. We identify several criteria to make this strategy effective, and develop an approach that satisfies these criteria. We empirically investigate the method in a variety of settings to identify when it is effective and when it fails.
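The abstract describes a two-stage procedure: fit a "calibration model" of the environment from offline logs, then simulate online learning inside that model for each candidate hyperparameter configuration and pick the configuration that performs best in simulation. The Python sketch below is only a minimal illustration of that idea, not the authors' implementation: the calibration model is a toy nearest-neighbour lookup over logged transitions, the agent is a trivial epsilon-greedy bandit, and all names such as `CalibrationModel`, `run_agent`, and `tune_offline` are hypothetical placeholders introduced here for illustration.

```python
import numpy as np


class CalibrationModel:
    """Hypothetical stand-in for a learned environment model fit from offline logs.

    Given (state, action), it returns the reward and next state of the closest
    logged transition with the same action (a crude nearest-neighbour model).
    """

    def __init__(self, transitions):
        # transitions: list of (state, action, reward, next_state) tuples
        self.transitions = transitions

    def step(self, state, action):
        best = min(
            (t for t in self.transitions if t[1] == action),
            key=lambda t: np.linalg.norm(np.asarray(t[0]) - np.asarray(state)),
        )
        _, _, reward, next_state = best
        return next_state, reward


def run_agent(model, hyperparams, episodes=20, horizon=50, start_state=(0.0,)):
    """Simulate online learning inside the calibration model.

    Returns total reward as a proxy for how well these hyperparameters would
    do in the real environment. The 'agent' is a toy epsilon-greedy learner
    over two actions, purely for illustration.
    """
    epsilon, step_size = hyperparams["epsilon"], hyperparams["step_size"]
    q = np.zeros(2)  # action values for 2 actions
    total = 0.0
    rng = np.random.default_rng(0)
    for _ in range(episodes):
        state = start_state
        for _ in range(horizon):
            a = rng.integers(2) if rng.random() < epsilon else int(np.argmax(q))
            state, r = model.step(state, a)
            q[a] += step_size * (r - q[a])  # incremental value update
            total += r
    return total


def tune_offline(logged_transitions, candidate_configs):
    """Stage 1: fit the calibration model; stage 2: rank hyperparameters by
    simulated learning performance inside it."""
    model = CalibrationModel(logged_transitions)
    scores = {i: run_agent(model, cfg) for i, cfg in enumerate(candidate_configs)}
    best = max(scores, key=scores.get)
    return candidate_configs[best], scores


if __name__ == "__main__":
    # Tiny fabricated log for the demo: (state, action, reward, next_state)
    logs = [((0.0,), 0, 1.0, (0.1,)), ((0.0,), 1, 0.0, (-0.1,)),
            ((0.1,), 0, 1.0, (0.2,)), ((-0.1,), 1, 0.0, (-0.2,))]
    configs = [{"epsilon": 0.1, "step_size": 0.1},
               {"epsilon": 0.3, "step_size": 0.5}]
    best_cfg, all_scores = tune_offline(logs, configs)
    print("selected hyperparameters:", best_cfg)
```

The key design point the sketch mirrors is that no candidate configuration ever touches the real environment: all trial-and-error happens inside the model learned from the offline logs, and only the selected configuration would be deployed online.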
