Improved Exploration in Factored Average-Reward MDPs

Publication: Contribution to book/anthology/report › Conference article in proceedings › Research › peer-reviewed


  • Full text

    Publisher's published version, 611 KB, PDF document

We consider a regret minimization task under the average-reward criterion in an unknown Factored Markov Decision Process (FMDP). More specifically, we consider an FMDP where the state-action space X and the state space S admit the respective factored forms X = X₁ ⊗ ⋯ ⊗ Xₙ and S = S₁ ⊗ ⋯ ⊗ Sₘ, and the transition and reward functions are factored over X and S. Assuming a known factorization structure, we introduce a novel regret minimization strategy inspired by the popular UCRL strategy, called DBN-UCRL, which relies on Bernstein-type confidence sets defined for individual elements of the transition function. We show that for a generic factorization structure, DBN-UCRL achieves a regret bound whose leading term strictly improves over existing regret bounds in terms of the dependencies on the sizes of the Sᵢ's and the diameter. We further show that when the factorization structure corresponds to the Cartesian product of some base MDPs, the regret of DBN-UCRL is upper bounded by the sum of the regrets of the base MDPs. We demonstrate, through numerical experiments on standard environments, that DBN-UCRL empirically enjoys substantially lower regret than existing algorithms with frequentist regret guarantees.
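The abstract's key ingredient is a Bernstein-type confidence set built per element of the (factored) transition function. As an illustrative sketch only, the following shows a generic empirical-Bernstein confidence radius for a single estimated transition probability; the function name, constants, and the exact form of the logarithmic term are assumptions for illustration and do not reproduce the paper's precise bound.

```python
import math

def bernstein_radius(p_hat, n, delta):
    """Generic empirical Bernstein-type confidence radius for a single
    estimated transition probability p_hat built from n samples,
    holding with probability at least 1 - delta.

    Illustrative sketch: the variance term shrinks as sqrt(1/n), while
    the second-order term shrinks as 1/n, so the radius is tighter than
    a Hoeffding-style bound when p_hat(1 - p_hat) is small.
    Constants here are for illustration, not the paper's exact bound.
    """
    if n == 0:
        return 1.0  # no data: the trivial radius covers [0, 1]
    log_term = math.log(3.0 / delta)
    variance_part = math.sqrt(2.0 * p_hat * (1.0 - p_hat) * log_term / n)
    correction_part = 3.0 * log_term / n
    return variance_part + correction_part
```

In a factored MDP, one such radius would be maintained per entry of each local transition factor, so the confidence set scales with the sizes of the Sᵢ's rather than with the full product state space.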

Title: Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS)
Status: Published - 2021
Event: 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021) - San Diego, USA
Duration: 13 Apr 2021 – 15 Apr 2021

Series: Proceedings of Machine Learning Research


ID: 301365745