Tightening Exploration in Upper Confidence Reinforcement Learning

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Tightening Exploration in Upper Confidence Reinforcement Learning. / Bourel, Hippolyte ; Maillard, Odalric ; Talebi, Sadegh.

Proceedings of the 37th International Conference on Machine Learning. PMLR, 2020. pp. 1056-1066 (Proceedings of Machine Learning Research, Vol. 119).


Harvard

Bourel, H, Maillard, O & Talebi, S 2020, Tightening Exploration in Upper Confidence Reinforcement Learning. in Proceedings of the 37th International Conference on Machine Learning. PMLR, Proceedings of Machine Learning Research, vol. 119, pp. 1056-1066, 37th International Conference on Machine Learning - ICML 2020, Vienna, Austria, 12/06/2020.

APA

Bourel, H., Maillard, O., & Talebi, S. (2020). Tightening Exploration in Upper Confidence Reinforcement Learning. In Proceedings of the 37th International Conference on Machine Learning (pp. 1056-1066). PMLR. Proceedings of Machine Learning Research, Vol. 119.

Vancouver

Bourel H, Maillard O, Talebi S. Tightening Exploration in Upper Confidence Reinforcement Learning. In Proceedings of the 37th International Conference on Machine Learning. PMLR. 2020. p. 1056-1066. (Proceedings of Machine Learning Research, Vol. 119).

Author

Bourel, Hippolyte ; Maillard, Odalric ; Talebi, Sadegh. / Tightening Exploration in Upper Confidence Reinforcement Learning. Proceedings of the 37th International Conference on Machine Learning. PMLR, 2020. pp. 1056-1066 (Proceedings of Machine Learning Research, Vol. 119).

Bibtex

@inproceedings{8d86470c7c9e4fc593f8b916a619a10c,
title = "Tightening Exploration in Upper Confidence Reinforcement Learning",
abstract = "The upper confidence reinforcement learning (UCRL2) algorithm introduced in \citep{jaksch2010near} is a popular method to perform regret minimization in unknown discrete Markov Decision Processes under the average-reward criterion. Despite its nice and generic theoretical regret guarantees, this algorithm and its variants have remained until now mostly theoretical as numerical experiments in simple environments exhibit long burn-in phases before the learning takes place. In pursuit of practical efficiency, we present UCRL3, following the lines of UCRL2, but with two key modifications: First, it uses state-of-the-art time-uniform concentration inequalities to compute confidence sets on the reward and (component-wise) transition distributions for each state-action pair. Furthermore, to tighten exploration, it uses an adaptive computation of the support of each transition distribution, which in turn enables us to revisit the extended value iteration procedure of UCRL2 to optimize over distributions with reduced support by disregarding low probability transitions, while still ensuring near-optimism. We demonstrate, through numerical experiments in standard environments, that reducing exploration this way yields a substantial numerical improvement compared to UCRL2 and its variants. On the theoretical side, these key modifications enable us to derive a regret bound for UCRL3 improving on UCRL2, that for the first time makes appear notions of local diameter and local effective support, thanks to variance-aware concentration bounds.",
author = "Hippolyte Bourel and Odalric Maillard and Sadegh Talebi",
year = "2020",
language = "English",
series = "Proceedings of Machine Learning Research",
pages = "1056--1066",
booktitle = "Proceedings of the 37th International Conference on Machine Learning",
publisher = "PMLR",
note = "37th International Conference on Machine Learning - ICML 2020 ; Conference date: 12-06-2020 Through 18-06-2020",

}
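As a reading aid for the abstract above, the following is a minimal, purely illustrative Python sketch of the two ingredients it mentions: a per-component, variance-aware confidence radius for an empirical transition distribution, and a pruning of low-probability next states before optimistic planning. This is not the authors' implementation; the function names, the Bernstein-style bound, and the pruning rule are assumptions made for the sketch, and the paper's actual time-uniform inequalities and constants differ.

import numpy as np

def componentwise_radius(p_hat, n, delta):
    # Bernstein-style per-component confidence radius for an empirical
    # transition distribution p_hat estimated from n samples.
    # Illustrative only: the exact time-uniform bound used by UCRL3
    # is specified in the paper, not reproduced here.
    n = max(int(n), 1)
    log_term = np.log(1.0 / delta)
    variance = p_hat * (1.0 - p_hat)  # Bernoulli variance proxy per next state
    return np.sqrt(2.0 * variance * log_term / n) + log_term / (3.0 * n)

def pruned_support(p_hat, n, delta):
    # Hypothetical pruning rule standing in for the adaptive support
    # computation described in the abstract: keep only next states whose
    # empirical probability exceeds its own confidence radius.
    radius = componentwise_radius(p_hat, n, delta)
    return np.flatnonzero(p_hat > radius)

if __name__ == "__main__":
    # Toy transition estimate over 5 next states for one state-action pair.
    p_hat = np.array([0.55, 0.30, 0.10, 0.03, 0.02])
    n, delta = 200, 0.05
    print("confidence radii:", componentwise_radius(p_hat, n, delta))
    print("retained support:", pruned_support(p_hat, n, delta))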

RIS

TY - GEN

T1 - Tightening Exploration in Upper Confidence Reinforcement Learning

AU - Bourel, Hippolyte

AU - Maillard, Odalric

AU - Talebi, Sadegh

PY - 2020

Y1 - 2020

N2 - The upper confidence reinforcement learning (UCRL2) algorithm introduced in (Jaksch et al., 2010) is a popular method for regret minimization in unknown discrete Markov Decision Processes under the average-reward criterion. Despite its generic theoretical regret guarantees, this algorithm and its variants have so far remained mostly theoretical, as numerical experiments in simple environments exhibit long burn-in phases before learning takes place. In pursuit of practical efficiency, we present UCRL3, which follows the lines of UCRL2 but with two key modifications. First, it uses state-of-the-art time-uniform concentration inequalities to compute confidence sets on the reward and (component-wise) transition distributions for each state-action pair. Second, to tighten exploration, it uses an adaptive computation of the support of each transition distribution, which in turn enables us to revisit the extended value iteration procedure of UCRL2 and optimize over distributions with reduced support by disregarding low-probability transitions, while still ensuring near-optimism. We demonstrate, through numerical experiments in standard environments, that reducing exploration in this way yields a substantial numerical improvement over UCRL2 and its variants. On the theoretical side, these key modifications enable us to derive a regret bound for UCRL3 that improves on UCRL2 and that, for the first time, involves notions of local diameter and local effective support, thanks to variance-aware concentration bounds.

AB - The upper confidence reinforcement learning (UCRL2) algorithm introduced in (Jaksch et al., 2010) is a popular method for regret minimization in unknown discrete Markov Decision Processes under the average-reward criterion. Despite its generic theoretical regret guarantees, this algorithm and its variants have so far remained mostly theoretical, as numerical experiments in simple environments exhibit long burn-in phases before learning takes place. In pursuit of practical efficiency, we present UCRL3, which follows the lines of UCRL2 but with two key modifications. First, it uses state-of-the-art time-uniform concentration inequalities to compute confidence sets on the reward and (component-wise) transition distributions for each state-action pair. Second, to tighten exploration, it uses an adaptive computation of the support of each transition distribution, which in turn enables us to revisit the extended value iteration procedure of UCRL2 and optimize over distributions with reduced support by disregarding low-probability transitions, while still ensuring near-optimism. We demonstrate, through numerical experiments in standard environments, that reducing exploration in this way yields a substantial numerical improvement over UCRL2 and its variants. On the theoretical side, these key modifications enable us to derive a regret bound for UCRL3 that improves on UCRL2 and that, for the first time, involves notions of local diameter and local effective support, thanks to variance-aware concentration bounds.

M3 - Article in proceedings

T3 - Proceedings of Machine Learning Research

SP - 1056

EP - 1066

BT - Proceedings of the 37th International Conference on Machine Learning

PB - PMLR

T2 - 37th International Conference on Machine Learning - ICML 2020

Y2 - 12 June 2020 through 18 June 2020

ER -

ID: 260666727