Direct policy search: intrinsic vs. extrinsic perturbations

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › Peer-reviewed

Reinforcement learning (RL) is a biologically inspired learning
paradigm based on trial and error. A successful RL algorithm
has to balance exploration of new behavioral strategies against exploitation
of already acquired knowledge. In the initial learning phase, exploration
is the dominant process. Exploration is realized by stochastic perturbations,
which can be applied at different levels. When considering direct
policy search in the space of neural network policies, exploration can be
applied on the synaptic level or on the level of neuronal activity. We
propose neuroevolution strategies (NeuroESs) for direct policy search in
RL. Learning with NeuroESs can be interpreted as modelling extrinsic
perturbations at the level of synaptic weights. In contrast, policy
gradient methods (PGMs) can be regarded as applying intrinsic perturbations
to neuronal activity. We compare these two approaches conceptually and
experimentally.
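
To make the distinction concrete, the following Python sketch (ours, not code from the paper) contrasts weight-space perturbation in a simple evolution strategy with action-space perturbation in a REINFORCE-style policy gradient, on a toy one-step problem. The linear policy, reward function, and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch contrasting the two kinds of perturbation on a toy
# one-step task: maximise the reward R(a) = -(a - 3)^2 achieved by a
# linear policy a = w * x with fixed input x = 1.
import numpy as np

rng = np.random.default_rng(0)
x = 1.0                                      # fixed observation

def reward(action):
    return -(action - 3.0) ** 2              # optimum at action = 3

# Extrinsic perturbation (evolution strategy): noise on the synaptic weight.
w_es, sigma, lam = 0.0, 0.5, 20
for _ in range(200):
    eps = rng.normal(0.0, 1.0, lam)          # standard-normal weight noise
    fitness = np.array([reward((w_es + sigma * e) * x) for e in eps])
    adv = fitness - fitness.mean()           # centred fitness as baseline
    w_es += 0.1 / (lam * sigma) * adv @ eps  # ES gradient estimate

# Intrinsic perturbation (policy gradient): noise on the neuronal activity.
w_pg, sigma_a = 0.0, 0.5
for _ in range(2000):
    mu = w_pg * x                            # mean action of the policy
    a = rng.normal(mu, sigma_a)              # action-space (activity) noise
    # REINFORCE: grad_w log N(a; mu, sigma_a^2) = (a - mu) / sigma_a^2 * x
    w_pg += 0.01 * reward(a) * (a - mu) / sigma_a**2 * x

print(f"ES weight: {w_es:.2f}, PG weight: {w_pg:.2f} (optimum 3.00)")
```

Both updates estimate the same policy gradient, but from different noise sources: the ES correlates fitness with perturbations of the weight itself (extrinsic), while REINFORCE correlates reward with perturbations of the emitted action (intrinsic), mirroring the distinction drawn in the abstract.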
Original language: English
Title: Workshop New Challenges in Neural Computation
Editors: B. Hammer, T. Villmann
Number of pages: 7
Publication date: 2010
Pages: 33-39
Status: Published - 2010
Externally published: Yes
Event: Workshop New Challenges in Neural Computation 2010 - Karlsruhe, Germany
Duration: 21 Sep 2010 - 21 Sep 2010

Conference

Conference: Workshop New Challenges in Neural Computation 2010
Country: Germany
City: Karlsruhe
Period: 21/09/2010 - 21/09/2010
Name: Machine Learning Reports
Volume: 04/2010
ISSN: 1865-3960
