Paper Title
CACTO: Continuous Actor-Critic with Trajectory Optimization -- Towards global optimality
Paper Authors
Paper Abstract
This paper presents a novel algorithm for the continuous control of dynamical systems that combines Trajectory Optimization (TO) and Reinforcement Learning (RL) in a single framework. The motivations behind this algorithm are the two main limitations of TO and RL when applied to continuous nonlinear systems to minimize a non-convex cost function. Specifically, TO can get stuck in poor local minima when the search is not initialized close to a "good" minimum. On the other hand, when dealing with continuous state and control spaces, the RL training process may be excessively long and strongly dependent on the exploration strategy. Thus, our algorithm learns a "good" control policy via TO-guided RL policy search that, when used as an initial guess provider for TO, makes the trajectory optimization process less prone to converge to poor local optima. Our method is validated on several reaching problems featuring non-convex obstacle avoidance with different dynamical systems, including a car model with 6D state, and a 3-joint planar manipulator. Our results show the great capabilities of CACTO in escaping local minima, while being more computationally efficient than the Deep Deterministic Policy Gradient (DDPG) and Proximal Policy Optimization (PPO) RL algorithms.
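
To make the interplay described in the abstract concrete (TO episodes guiding an actor-critic learner, and the learned policy fed back to TO as an initial-guess provider), the following Python sketch alternates the two steps on a toy 1D double-integrator reaching task. This is not the authors' implementation: the helper names solve_to, features, and fit_linear are hypothetical, random shooting stands in for a gradient-based TO solver, and least-squares fits stand in for the neural actor-critic updates used in CACTO (in particular, the actor is simply cloned from the TO controls here, whereas CACTO updates it through the critic).

```python
import numpy as np

rng = np.random.default_rng(0)
DT, HORIZON = 0.1, 20

def step(x, u):
    # Double-integrator dynamics: state x = [position, velocity], control u = acceleration.
    return np.array([x[0] + DT * x[1], x[1] + DT * u])

def running_cost(x, u):
    # Quadratic cost driving the position to the origin, with a small control penalty.
    return x[0] ** 2 + 0.1 * u ** 2

def rollout(x0, controls):
    # Simulate a control sequence; return the visited states and the cost-to-go at each step.
    xs, costs, x = [x0], [], x0
    for u in controls:
        costs.append(running_cost(x, u))
        x = step(x, u)
        xs.append(x)
    ctg = np.cumsum(costs[::-1])[::-1]
    return np.array(xs[:-1]), np.array(ctg)

def solve_to(x0, guess=None, samples=256, noise=0.5):
    # Stand-in "TO" solver: random shooting around an (optional) initial guess.
    base = np.zeros(HORIZON) if guess is None else guess
    best_u, best_cost = base, np.inf
    for _ in range(samples):
        u = base + noise * rng.standard_normal(HORIZON)
        _, ctg = rollout(x0, u)
        if ctg[0] < best_cost:
            best_u, best_cost = u, ctg[0]
    return best_u

def features(x):
    # Hand-crafted features replacing the neural networks of the real algorithm.
    return np.array([x[0], x[1], 1.0])

def fit_linear(F, targets):
    # Least-squares fit standing in for a gradient-based network update.
    return np.linalg.lstsq(F, targets, rcond=None)[0]

# Alternation sketched from the abstract: TO episodes guide the actor-critic,
# and the learned actor later provides initial guesses back to TO.
actor_w = np.zeros(3)
all_states, all_ctgs, all_controls = [], [], []
for episode in range(30):
    x0 = rng.uniform(-2.0, 2.0, size=2)

    guess = None
    if episode > 5:  # once the actor has been trained a bit, warm-start TO with its rollout
        guess, x = [], x0
        for _ in range(HORIZON):
            u = float(features(x) @ actor_w)
            guess.append(u)
            x = step(x, u)
        guess = np.array(guess)

    u_star = solve_to(x0, guess)
    xs, ctg = rollout(x0, u_star)
    all_states.append(xs)
    all_ctgs.append(ctg)
    all_controls.append(u_star)

    F = np.array([features(x) for x in np.concatenate(all_states)])
    critic_w = fit_linear(F, np.concatenate(all_ctgs))      # critic: state -> cost-to-go
    actor_w = fit_linear(F, np.concatenate(all_controls))   # actor: cloned from TO controls

print("critic weights:", critic_w, "actor weights:", actor_w)
```

The design point carried over from the abstract is the feedback loop: once the actor has absorbed enough TO episodes, its rollout is passed to solve_to as the initial guess, which is what is intended to make the optimizer less likely to settle in a poor local optimum.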