TY - JOUR
T1 - Adaptive evolution strategy with ensemble of mutations for Reinforcement Learning
AU - Ajani, Oladayo S.
AU - Mallipeddi, Rammohan
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/6/7
Y1 - 2022/6/7
N2 - Evolving the weights of learning networks through evolutionary computation (neuroevolution) has proven scalable over a range of challenging Reinforcement Learning (RL) control tasks. However, as with most black-box optimization problems, existing neuroevolution approaches require an additional adaptation process to effectively balance exploration and exploitation through the selection of sensitive hyper-parameters throughout the evolution process. Consequently, these methods are often burdened by the computational complexity of such adaptation processes, which typically rely on a number of intricately formulated strategy parameters. In this paper, an Evolution Strategy (ES) with a simple yet efficient ensemble of mutation strategies is proposed. Specifically, two distinct mutation strategies coexist throughout the evolution process, each associated with its own population subset. Elites for generating a population of offspring are then obtained by co-evaluating the combined population. Experiments on a testbed of six (6) black-box optimization problems generated from a classical control problem, together with six (6) proven continuous RL agents, demonstrate that the proposed method converges faster and scales better than the canonical ES. Furthermore, the proposed Adaptive Ensemble ES (AEES) shows an average of 5-10,000x and 10-100x better sample complexity in low- and high-dimensional problems, respectively, than the associated base DRL agents.
AB - Evolving the weights of learning networks through evolutionary computation (neuroevolution) has proven scalable over a range of challenging Reinforcement Learning (RL) control tasks. However, as with most black-box optimization problems, existing neuroevolution approaches require an additional adaptation process to effectively balance exploration and exploitation through the selection of sensitive hyper-parameters throughout the evolution process. Consequently, these methods are often burdened by the computational complexity of such adaptation processes, which typically rely on a number of intricately formulated strategy parameters. In this paper, an Evolution Strategy (ES) with a simple yet efficient ensemble of mutation strategies is proposed. Specifically, two distinct mutation strategies coexist throughout the evolution process, each associated with its own population subset. Elites for generating a population of offspring are then obtained by co-evaluating the combined population. Experiments on a testbed of six (6) black-box optimization problems generated from a classical control problem, together with six (6) proven continuous RL agents, demonstrate that the proposed method converges faster and scales better than the canonical ES. Furthermore, the proposed Adaptive Ensemble ES (AEES) shows an average of 5-10,000x and 10-100x better sample complexity in low- and high-dimensional problems, respectively, than the associated base DRL agents.
KW - Black-box optimization
KW - Ensemble
KW - Evolution strategy
KW - Mutation strategy
KW - Reinforcement Learning
UR - https://www.scopus.com/pages/publications/85127466821
U2 - 10.1016/j.knosys.2022.108624
DO - 10.1016/j.knosys.2022.108624
M3 - Article
AN - SCOPUS:85127466821
SN - 0950-7051
VL - 245
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 108624
ER -