Paper Title

Saute RL: Almost Surely Safe Reinforcement Learning Using State Augmentation

Paper Authors

Aivar Sootla, Alexander I. Cowen-Rivers, Taher Jafferjee, Ziyan Wang, David Mguni, Jun Wang, Haitham Bou-Ammar

Paper Abstract

Satisfying safety constraints almost surely (or with probability one) can be critical for the deployment of Reinforcement Learning (RL) in real-life applications. For example, plane landing and take-off should ideally occur with probability one. We address the problem by introducing Safety Augmented (Saute) Markov Decision Processes (MDPs), where the safety constraints are eliminated by augmenting them into the state-space and reshaping the objective. We show that Saute MDP satisfies the Bellman equation and moves us closer to solving Safe RL with constraints satisfied almost surely. We argue that Saute MDP allows viewing the Safe RL problem from a different perspective enabling new features. For instance, our approach has a plug-and-play nature, i.e., any RL algorithm can be "Sauteed". Additionally, state augmentation allows for policy generalization across safety constraints. We finally show that Saute RL algorithms can outperform their state-of-the-art counterparts when constraint satisfaction is of high importance.
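
The construction described in the abstract amounts to tracking the remaining safety budget as an extra state variable and replacing the reward with a penalty once that budget is exhausted. The sketch below illustrates this idea as a Gymnasium-style environment wrapper; it is a minimal illustration, not the paper's exact formulation (which also normalizes and discounts the safety state). The names SauteWrapper, safety_budget, unsafe_reward, and the assumption that the base environment reports its per-step safety cost under info["cost"] are all illustrative.

```python
import numpy as np
import gymnasium as gym


class SauteWrapper(gym.Wrapper):
    """Illustrative safety-state augmentation in the spirit of Saute RL.

    Tracks the remaining (normalized) safety budget as an extra state
    dimension and replaces the reward with a fixed penalty once the
    budget is exhausted, so the constraint is absorbed into the
    dynamics and the objective.
    """

    def __init__(self, env, safety_budget=1.0, unsafe_reward=-10.0):
        super().__init__(env)
        self.safety_budget = safety_budget  # total allowed cost per episode (assumed)
        self.unsafe_reward = unsafe_reward  # reward once the budget is spent (assumed)
        # Append one dimension for the remaining budget to the observation space.
        low = np.append(self.env.observation_space.low, -np.inf)
        high = np.append(self.env.observation_space.high, np.inf)
        self.observation_space = gym.spaces.Box(low=low, high=high, dtype=np.float64)

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        self._z = 1.0  # remaining budget, normalized to start at 1
        return np.append(obs, self._z), info

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        cost = float(info.get("cost", 0.0))   # per-step safety cost (assumed info key)
        self._z -= cost / self.safety_budget  # spend part of the budget
        if self._z < 0.0:
            reward = self.unsafe_reward       # reshape the objective when unsafe
        return np.append(obs, self._z), reward, terminated, truncated, info
```

Because the wrapped environment is an ordinary MDP, any off-the-shelf RL algorithm can be trained on it without modification, which is the plug-and-play property the abstract refers to.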
