Large-scale noise-resilient evolution-strategies

Publication: Contribution to book/anthology/report › Conference article in proceedings › Research › peer-reviewed

Standard

Large-scale noise-resilient evolution-strategies. / Krause, Oswin.

GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference. Association for Computing Machinery, 2019. pp. 682-690.

Harvard

Krause, O 2019, Large-scale noise-resilient evolution-strategies. in GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference. Association for Computing Machinery, pp. 682-690, 2019 Genetic and Evolutionary Computation Conference, GECCO 2019, Prague, Czech Republic, 13/07/2019. https://doi.org/10.1145/3321707.3321724

APA

Krause, O. (2019). Large-scale noise-resilient evolution-strategies. In GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference (pp. 682-690). Association for Computing Machinery. https://doi.org/10.1145/3321707.3321724

Vancouver

Krause O. Large-scale noise-resilient evolution-strategies. In GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference. Association for Computing Machinery. 2019. p. 682-690 https://doi.org/10.1145/3321707.3321724

Author

Krause, Oswin. / Large-scale noise-resilient evolution-strategies. GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference. Association for Computing Machinery, 2019. pp. 682-690

Bibtex

@inproceedings{03417840245449bba5d57e613e51b555,
title = "Large-scale noise-resilient evolution-strategies",
abstract = "Ranking-based Evolution Strategies (ES) are efficient algorithms for problems where gradient-information is not available or when the gradient is not informative. This makes ES interesting for Reinforcement-Learning (RL). However, in RL the high dimensionality of the search-space, as well as the noise of the simulations make direct adaptation of ES challenging. Noise makes ranking points difficult and a large budget of re-evaluations is needed to maintain a bounded error rate. In this work, the ranked weighting is replaced by a linear weighting function, which results in nearly unbiased stochastic gradient descent (SGD) on the manifold of probability distributions. The approach is theoretically analysed and the algorithm is adapted based on the results of the analysis. It is shown that in the limit of infinite dimensions, the algorithm becomes invariant to smooth monotonous transformations of the objective function. Further, drawing on the theory of SGD, an adaptation of the learning-rates based on the noise-level is proposed at the cost of a second evaluation for every sampled point. It is shown empirically that the proposed method improves on simple ES using Cumulative Step-size Adaptation and ranking. Further, it is shown that the proposed algorithm is more noise-resilient than a ranking-based approach.",
keywords = "CMA-ES, Evolution Strategies, Large-Scale, Reinforcement-Learning, Stochastic Optimization",
author = "Oswin Krause",
year = "2019",
doi = "10.1145/3321707.3321724",
language = "English",
pages = "682--690",
booktitle = "GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference",
publisher = "Association for Computing Machinery",
note = "2019 Genetic and Evolutionary Computation Conference, GECCO 2019 ; Conference date: 13-07-2019 Through 17-07-2019",

}
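
For readers who want a concrete picture of the method sketched in the abstract, the following is a minimal Python illustration of the two ideas it names: a linear (centered-fitness) weighting in place of rank-based utilities, which makes the update behave like nearly unbiased stochastic gradient descent, and a learning rate scaled by a noise estimate obtained from a second evaluation of every sampled point. The function name, constants, and the exact signal-to-noise scaling rule are illustrative assumptions, not the update rules from the paper.

import numpy as np

def linear_weight_es_step(f, x, sigma=0.1, lr=0.01, n_samples=100, rng=None):
    # One ES step with linear weights and a noise-adapted learning rate.
    # Sketch based on the abstract only; the constants and the scaling
    # rule below are assumptions, not the paper's formulas.
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((n_samples, x.size))   # search directions
    xs = x + sigma * eps                             # sampled points
    f1 = np.array([f(xi) for xi in xs])              # first evaluation
    f2 = np.array([f(xi) for xi in xs])              # re-evaluation of the same points
    fbar = 0.5 * (f1 + f2)                           # averaged fitness
    w = fbar - fbar.mean()                           # linear (centered) weights, no ranking
    # Estimate the evaluation-noise variance from the paired re-evaluations:
    # Var(f1 - f2) = 2 * noise_var when the noise is independent per call.
    noise_var = 0.5 * np.mean((f1 - f2) ** 2)
    # fbar still carries noise_var / 2 of residual noise; the rest is signal.
    signal_var = max(fbar.var() - 0.5 * noise_var, 0.0)
    # Assumed rule: shrink the step when noise dominates the signal.
    eta = lr * signal_var / (signal_var + noise_var + 1e-12)
    grad = (w[:, None] * eps).sum(axis=0) / (n_samples * sigma)
    return x - eta * grad                            # descent step (minimisation)

# Toy usage: a noisy sphere function in 1000 dimensions.
f = lambda x: float(x @ x) + float(np.random.normal(scale=0.1))
x = np.ones(1000)
for _ in range(50):
    x = linear_weight_es_step(f, x)

Because the weights are linear in the (averaged) fitness rather than rank-based, each update is an almost unbiased gradient estimate, which is what lets SGD learning-rate theory drive the noise adaptation; the price, as the abstract notes, is the doubled evaluation budget.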

RIS

TY - GEN
T1 - Large-scale noise-resilient evolution-strategies
AU - Krause, Oswin
PY - 2019
Y1 - 2019
N2 - Ranking-based Evolution Strategies (ES) are efficient algorithms for problems where gradient-information is not available or when the gradient is not informative. This makes ES interesting for Reinforcement-Learning (RL). However, in RL the high dimensionality of the search-space, as well as the noise of the simulations make direct adaptation of ES challenging. Noise makes ranking points difficult and a large budget of re-evaluations is needed to maintain a bounded error rate. In this work, the ranked weighting is replaced by a linear weighting function, which results in nearly unbiased stochastic gradient descent (SGD) on the manifold of probability distributions. The approach is theoretically analysed and the algorithm is adapted based on the results of the analysis. It is shown that in the limit of infinite dimensions, the algorithm becomes invariant to smooth monotonous transformations of the objective function. Further, drawing on the theory of SGD, an adaptation of the learning-rates based on the noise-level is proposed at the cost of a second evaluation for every sampled point. It is shown empirically that the proposed method improves on simple ES using Cumulative Step-size Adaptation and ranking. Further, it is shown that the proposed algorithm is more noise-resilient than a ranking-based approach.
AB - Ranking-based Evolution Strategies (ES) are efficient algorithms for problems where gradient-information is not available or when the gradient is not informative. This makes ES interesting for Reinforcement-Learning (RL). However, in RL the high dimensionality of the search-space, as well as the noise of the simulations make direct adaptation of ES challenging. Noise makes ranking points difficult and a large budget of re-evaluations is needed to maintain a bounded error rate. In this work, the ranked weighting is replaced by a linear weighting function, which results in nearly unbiased stochastic gradient descent (SGD) on the manifold of probability distributions. The approach is theoretically analysed and the algorithm is adapted based on the results of the analysis. It is shown that in the limit of infinite dimensions, the algorithm becomes invariant to smooth monotonous transformations of the objective function. Further, drawing on the theory of SGD, an adaptation of the learning-rates based on the noise-level is proposed at the cost of a second evaluation for every sampled point. It is shown empirically that the proposed method improves on simple ES using Cumulative Step-size Adaptation and ranking. Further, it is shown that the proposed algorithm is more noise-resilient than a ranking-based approach.
KW - CMA-ES
KW - Evolution Strategies
KW - Large-Scale
KW - Reinforcement-Learning
KW - Stochastic Optimization
UR - http://www.scopus.com/inward/record.url?scp=85072315663&partnerID=8YFLogxK
U2 - 10.1145/3321707.3321724
DO - 10.1145/3321707.3321724
M3 - Article in proceedings
SP - 682
EP - 690
BT - GECCO 2019 - Proceedings of the 2019 Genetic and Evolutionary Computation Conference
PB - Association for Computing Machinery
T2 - 2019 Genetic and Evolutionary Computation Conference, GECCO 2019
Y2 - 13 July 2019 through 17 July 2019
ER -
