Adaptive evolution strategy with ensemble of mutations for Reinforcement Learning

Research output: Contribution to journal › Article › peer-review

24 Scopus citations

Abstract

Evolving the weights of learning networks through evolutionary computation (neuroevolution) has proven scalable across a range of challenging Reinforcement Learning (RL) control tasks. However, as with most black-box optimization methods, existing neuroevolution approaches require an additional adaptation process to balance exploration and exploitation, driven by sensitive hyper-parameters that must be tuned throughout the evolution process. These methods are therefore often burdened by the computational cost of such adaptation, which typically relies on a number of intricately formulated strategy parameters. In this paper, an Evolution Strategy (ES) with a simple yet efficient ensemble of mutation strategies is proposed. Specifically, two distinct mutation strategies coexist throughout the evolution process, each associated with its own population subset; elites for generating the offspring population are then selected by co-evaluating the combined population. Experiments on a testbed of six (6) black-box optimization problems, generated from a classical control problem and six (6) proven continuous RL agents, demonstrate that the proposed method converges faster and scales better than the canonical ES. Furthermore, the proposed Adaptive Ensemble ES (AEES) achieves on average 5-10000x and 10-100x better sample complexity on low- and high-dimension problems, respectively, than the associated base DRL agents.
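As a rough illustration only (not the paper's actual AEES implementation), the core mechanism described in the abstract — two mutation strategies coexisting, each producing its own subset of offspring, with elites selected by co-evaluating the combined population — can be sketched as follows. The choice of Gaussian and Cauchy mutations, the sphere objective, and all hyper-parameter values here are assumptions for demonstration, not details from the paper.

```python
import math
import random


def sphere(x):
    """Toy black-box objective to minimize (stand-in for an RL task's negative return)."""
    return sum(xi * xi for xi in x)


def mutate_gaussian(parent, sigma):
    """Strategy 1 (assumed): isotropic Gaussian perturbation — exploitative."""
    return [xi + random.gauss(0.0, sigma) for xi in parent]


def mutate_cauchy(parent, scale):
    """Strategy 2 (assumed): heavy-tailed Cauchy perturbation — more exploratory."""
    return [xi + scale * math.tan(math.pi * (random.random() - 0.5)) for xi in parent]


def ensemble_es(f, dim, pop_size=20, n_elites=5, generations=200, seed=0):
    """Minimal elitist ES with an ensemble of two mutation strategies.

    Each strategy generates its own subset of the offspring population;
    elites are chosen by co-evaluating the combined population, so the
    better-performing strategy naturally contributes more parents over time.
    """
    random.seed(seed)
    parents = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_elites)]
    for _ in range(generations):
        offspring = []
        for i in range(pop_size):
            parent = random.choice(parents)
            if i < pop_size // 2:
                offspring.append(mutate_gaussian(parent, sigma=0.1))
            else:
                offspring.append(mutate_cauchy(parent, scale=0.05))
        # Co-evaluate parents and both offspring subsets together.
        combined = parents + offspring
        combined.sort(key=f)
        parents = combined[:n_elites]
    return parents[0], f(parents[0])


best, val = ensemble_es(sphere, dim=5)
```

Because selection operates on the combined population rather than on each subset separately, no explicit strategy-parameter adaptation is needed to decide which mutation operator should dominate.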

Original language: English
Article number: 108624
Journal: Knowledge-Based Systems
Volume: 245
DOIs
State: Published - 7 Jun 2022

Keywords

  • Black-box optimization
  • Ensemble
  • Evolution strategy
  • Mutation strategy
  • Reinforcement Learning
