Paper Title
Multi-Game Decision Transformers
Paper Authors
Paper Abstract
A longstanding goal of the field of AI is a method for learning a highly capable, generalist agent from diverse experience. In the subfields of vision and language, this was largely achieved by scaling up transformer-based models and training them on large, diverse datasets. Motivated by this progress, we investigate whether the same strategy can be used to produce generalist reinforcement learning agents. Specifically, we show that a single transformer-based model - with a single set of weights - trained purely offline can play a suite of up to 46 Atari games simultaneously at close-to-human performance. When trained and evaluated appropriately, we find that the same trends observed in language and vision hold, including scaling of performance with model size and rapid adaptation to new games via fine-tuning. We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning, and find that our Multi-Game Decision Transformer models offer the best scalability and performance. We release the pre-trained models and code to encourage further research in this direction.
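Concretely, the Decision Transformer approach treats offline RL as sequence modeling: logged trajectories are flattened into interleaved return-to-go, observation, and action tokens, and a causal transformer is trained to predict each action token from the tokens before it. The sketch below is a minimal, illustrative PyTorch version of that training step; the model class, dimensions, and tokenization scheme are assumptions chosen for brevity, not the paper's released implementation.

```python
# Minimal Decision-Transformer-style offline training step (illustrative only).
# Assumptions: trajectories are already tokenized into integer ids that
# interleave return-to-go, observation, and action tokens; actions are the
# 18 discrete Atari actions. Nothing here is from the paper's released code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDecisionTransformer(nn.Module):
    def __init__(self, vocab_size=256, n_actions=18, d_model=128,
                 n_layers=2, n_heads=4, max_len=96):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, max_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, tokens):
        # tokens: (batch, seq_len) ids of interleaved returns/states/actions
        seq_len = tokens.size(1)
        x = self.tok_emb(tokens) + self.pos_emb[:, :seq_len]
        # Causal mask so each position attends only to earlier tokens.
        mask = torch.triu(torch.full((seq_len, seq_len), float("-inf")),
                          diagonal=1)
        x = self.encoder(x, mask=mask)
        return self.action_head(x)  # next-action logits at every position

model = TinyDecisionTransformer()
tokens = torch.randint(0, 256, (4, 96))   # a fake batch of offline trajectories
actions = torch.randint(0, 18, (4, 96))   # action labels at each timestep
logits = model(tokens)
loss = F.cross_entropy(logits.reshape(-1, 18), actions.reshape(-1))
loss.backward()  # purely supervised update on logged data; no environment access
```

At evaluation time, such a model would be conditioned on a high target return and decoded autoregressively to produce actions; the paper's released models apply this return-conditioned setup at much larger scale across the multi-game Atari suite.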