Direct policy search: intrinsic vs. extrinsic perturbations

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Reinforcement learning (RL) is a biologically inspired learning paradigm based on trial-and-error learning. A successful RL algorithm has to balance exploration of new behavioral strategies against exploitation of already acquired knowledge. In the initial learning phase, exploration is the dominant process. Exploration is realized by stochastic perturbations, which can be applied at different levels. When considering direct policy search in the space of neural network policies, exploration can be applied at the synaptic level or at the level of neuronal activity. We propose neuroevolution strategies (NeuroESs) for direct policy search in RL. Learning using NeuroESs can be interpreted as modelling of extrinsic perturbations at the level of synaptic weights. In contrast, policy gradient methods (PGMs) can be regarded as intrinsic perturbations of neuronal activity. We compare these two approaches conceptually and experimentally.
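
To make the contrast concrete, the following is a minimal sketch (not the implementation from the paper) of the two exploration mechanisms on a toy one-layer linear policy: an evolution-strategy step applies extrinsic Gaussian perturbations directly to the synaptic weights, while a REINFORCE-style policy-gradient step keeps the weights fixed and injects noise into the neuronal output (the action). The function names (reward, es_step, pg_step) and the toy task are illustrative assumptions.

```python
# Sketch: extrinsic (weight-space) vs. intrinsic (activity-space) exploration.
import numpy as np

rng = np.random.default_rng(0)

x = np.array([0.5, -0.2, 0.1])   # fixed observation of the toy task
TARGET = 1.0                     # desired policy output


def reward(w, x):
    """Toy episodic return: how well the policy output w @ x matches TARGET."""
    return -(w @ x - TARGET) ** 2


def es_step(w, sigma=0.1, lam=20, lr=0.05):
    """Extrinsic perturbation: an evolution-strategy step that perturbs the
    synaptic weights themselves and moves toward better-performing samples."""
    eps = rng.normal(size=(lam, w.size))                   # weight-space noise
    returns = np.array([reward(w + sigma * e, x) for e in eps])
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)
    return w + lr / (lam * sigma) * eps.T @ adv


def pg_step(w, sigma=0.1, lam=20, lr=0.05):
    """Intrinsic perturbation: a REINFORCE-style policy-gradient step that
    keeps the weights fixed and perturbs the neuronal output (the action)."""
    mean_action = w @ x
    actions = mean_action + sigma * rng.normal(size=lam)   # activity-space noise
    returns = -(actions - TARGET) ** 2
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)
    # grad of log N(a | w@x, sigma^2) w.r.t. w is (a - w@x) / sigma^2 * x
    grad = ((actions - mean_action) / sigma**2 * adv).mean() * x
    return w + lr * grad


w = np.zeros(3)
for _ in range(200):
    w = es_step(w)
print("ES weights:", w, "return:", reward(w, x))

w = np.zeros(3)
for _ in range(200):
    w = pg_step(w)
print("PG weights:", w, "return:", reward(w, x))
```

Both sketches maximize the same toy return; they differ only in where the stochastic perturbation enters, which is the distinction the paper draws between NeuroESs and PGMs.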
Original language: English
Title of host publication: Workshop New Challenges in Neural Computation
Editors: B. Hammer, T. Villmann
Number of pages: 7
Publication date: 2010
Pages: 33-39
Publication status: Published - 2010
Externally published: Yes
Event: Workshop New Challenges in Neural Computation 2010 - Karlsruhe, Germany
Duration: 21 Sep 2010 → 21 Sep 2010

Conference

Conference: Workshop New Challenges in Neural Computation 2010
Country: Germany
City: Karlsruhe
Period: 21/09/2010 → 21/09/2010
Series: Machine Learning Reports
Volume: 04/2010
ISSN: 1865-3960
