Paper Title

Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs

Authors

Andrea Tirinzoni, Aymen Al-Marjani, Emilie Kaufmann

Abstract

In probably approximately correct (PAC) reinforcement learning (RL), an agent is required to identify an $ε$-optimal policy with probability $1-δ$. While minimax optimal algorithms exist for this problem, its instance-dependent complexity remains elusive in episodic Markov decision processes (MDPs). In this paper, we propose the first nearly matching (up to a horizon squared factor and logarithmic terms) upper and lower bounds on the sample complexity of PAC RL in deterministic episodic MDPs with finite state and action spaces. In particular, our bounds feature a new notion of sub-optimality gap for state-action pairs that we call the deterministic return gap. While our instance-dependent lower bound is written as a linear program, our algorithms are very simple and do not require solving such an optimization problem during learning. Their design and analyses employ novel ideas, including graph-theoretical concepts (minimum flows) and a new maximum-coverage exploration strategy.
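For concreteness, the PAC criterion referenced above can be stated as follows (a standard formalization; the symbols $\hat{\pi}$, $V^{*}$, and $s_1$ are illustrative notation, not necessarily the paper's): after interacting with the episodic MDP, the agent must output a policy $\hat{\pi}$ such that $\mathbb{P}\big(V^{\hat{\pi}}(s_1) \ge V^{*}(s_1) - ε\big) \ge 1 - δ$, where $V^{*}(s_1)$ is the optimal value from the initial state $s_1$. The sample complexity studied in the paper is the number of episodes needed to meet this guarantee.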
