Direct policy search: intrinsic vs. extrinsic perturbations
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Reinforcement learning (RL) is a biologically inspired learning
paradigm based on trial-and-error learning. A successful RL algorithm
has to balance exploration of new behavioral strategies and exploitation
of already obtained knowledge. In the initial learning phase, exploration
is the dominant process. Exploration is realized by stochastic perturbations,
which can be applied at different levels. When considering direct
policy search in the space of neural network policies, exploration can be
applied on the synaptic level or on the level of neuronal activity. We
propose neuroevolution strategies (NeuroESs) for direct policy search in
RL. Learning using NeuroESs can be interpreted as modelling of extrinsic
perturbations on the level of synaptic weights. In contrast, policy
gradient methods (PGMs) can be regarded as intrinsic perturbation of
neuronal activity. We compare these two approaches conceptually and
experimentally.
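The contrast between the two exploration schemes can be sketched in a few lines of NumPy. This is an illustrative toy with a linear policy; the function names, noise scale, and the ES/PGM labels are assumptions for exposition, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(w, s):
    """Deterministic linear policy: action = W s (toy stand-in for a neural network)."""
    return w @ s

def es_explore(w, s, sigma=0.1):
    """Extrinsic perturbation (NeuroES view): sample a perturbed weight
    matrix and act deterministically with it."""
    w_perturbed = w + sigma * rng.standard_normal(w.shape)
    return policy(w_perturbed, s), w_perturbed

def pg_explore(w, s, sigma=0.1):
    """Intrinsic perturbation (PGM view): keep the synaptic weights fixed
    and add Gaussian noise to the neuronal output activity."""
    return policy(w, s) + sigma * rng.standard_normal(w.shape[0])
```

In the ES case the noise lives in weight space, so one sample fixes the behavior for a whole episode; in the PGM case the noise is re-drawn at every activation, perturbing the activity around a fixed policy.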
Original language | English |
---|---|
Title of host publication | Workshop New Challenges in Neural Computation |
Editors | B. Hammer, T. Villmann |
Number of pages | 7 |
Publication date | 2010 |
Pages | 33-39 |
Publication status | Published - 2010 |
Externally published | Yes |
Event | Workshop New Challenges in Neural Computation 2010, Karlsruhe, Germany. Duration: 21 Sep 2010 → 21 Sep 2010 |
Conference
Conference | Workshop New Challenges in Neural Computation 2010 |
---|---|
Country | Germany
City | Karlsruhe
Period | 21/09/2010 → 21/09/2010
Series | Machine Learning Reports |
---|---|
Volume | 04/2010 |
ISSN | 1865-3960 |
Links
- https://www.techfak.uni-bielefeld.de/~fschleif/mlr/mlr_04_2010.pdf
Final published version
ID: 33863042