Paper Title
Follow the Neurally-Perturbed Leader for Adversarial Training
Paper Authors
Abstract
Game-theoretic models of learning are a powerful class of models for optimizing multi-objective architectures. Among them are zero-sum architectures, which have inspired adversarial learning frameworks. An important shortcoming of these zero-sum architectures is that gradient-based training leads to weak convergence and cyclic dynamics. We propose a novel follow-the-leader training algorithm for zero-sum architectures that guarantees convergence to a mixed Nash equilibrium without cyclic behavior. It is a special type of follow-the-perturbed-leader algorithm in which the perturbations are produced by a neural mediating agent. We validate our theoretical results by applying this training algorithm to games with convex and non-convex losses, as well as to generative adversarial architectures. Moreover, we customize the implementation of this algorithm for adversarial imitation learning applications. At every step of training, the mediator agent perturbs the observations with generated codes. As a result of these mediating codes, the proposed algorithm also learns efficiently in environments with various factors of variation. We validate this assertion using a procedurally generated game environment as well as synthetic data. A GitHub implementation is available.
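To illustrate the follow-the-perturbed-leader idea the abstract builds on, here is a minimal sketch on a two-player zero-sum matrix game (rock-paper-scissors). This uses the classical setup with random Gumbel perturbations; the paper's contribution replaces this random noise with perturbations generated by a learned neural mediating agent, which is not shown here. All names and the perturbation schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Payoff matrix of a zero-sum game: the row player minimizes A[i, j],
# the column player maximizes it (rock-paper-scissors).
A = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])

T = 5000
row_loss = np.zeros(A.shape[0])    # cumulative loss of each row action
col_gain = np.zeros(A.shape[1])    # cumulative payoff of each column action
row_counts = np.zeros(A.shape[0])  # how often each row action was played
col_counts = np.zeros(A.shape[1])

for t in range(T):
    # Follow the perturbed leader: each player best-responds to its
    # cumulative loss/gain plus a fresh random perturbation. In the
    # paper's algorithm, a neural mediator generates this perturbation.
    eps = np.sqrt(t + 1)  # illustrative noise schedule
    i = int(np.argmin(row_loss + eps * rng.gumbel(size=A.shape[0])))
    j = int(np.argmax(col_gain + eps * rng.gumbel(size=A.shape[1])))
    row_counts[i] += 1
    col_counts[j] += 1
    row_loss += A[:, j]
    col_gain += A[i, :]

# Empirical play frequencies approach the mixed Nash equilibrium
# of rock-paper-scissors, which is uniform (1/3, 1/3, 1/3).
print(row_counts / T)
print(col_counts / T)
```

Plain gradient-based best-response dynamics on this game cycle forever around the equilibrium; averaging perturbed best responses is what yields convergence of the empirical play to the mixed equilibrium.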