Tsallis-INF for decoupled exploration and exploitation in multi-armed bandits

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Documents

  • Fulltext: Final published version, 306 KB, PDF document

We consider a variation of the multi-armed bandit problem, introduced by Avner et al. (2012), in which the forecaster is allowed to choose one arm to explore and one arm to exploit at every round. The loss of the exploited arm is blindly suffered by the forecaster, while the loss of the explored arm is observed without being suffered. The goal of the learner is to minimize the regret. We derive a new algorithm using regularization by Tsallis entropy to achieve best-of-both-worlds guarantees. In the adversarial setting we show that the algorithm achieves the minimax optimal $O(\sqrt{KT})$ regret bound, slightly improving on the result of Avner et al. In the stochastic regime the algorithm achieves a time-independent regret bound, significantly improving on the result of Avner et al. The algorithm also achieves the same time-independent regret bound in the more general stochastically constrained adversarial regime introduced by Wei and Luo (2018).
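To make the decoupled protocol and the Tsallis-entropy regularization concrete, below is a minimal simulation sketch in Python. It is an illustration under stated assumptions, not the paper's exact algorithm: for simplicity both the exploitation arm and the exploration arm are drawn from the same 1/2-Tsallis-regularized distribution with an anytime learning rate $\eta_t = 2/\sqrt{t}$, whereas the paper's exploration distribution may differ; the names tsallis_inf_weights, decoupled_tsallis_inf, and loss_fn are hypothetical.

```python
import numpy as np

def tsallis_inf_weights(L_hat, eta, iters=100, tol=1e-12):
    # Solve w_i = 4 / (eta * (L_hat_i - x))^2 with sum_i w_i = 1 by
    # Newton's method on the normalization constant x (x < min L_hat).
    x = L_hat.min() - 2.0 / eta          # start where the sum is >= 1
    for _ in range(iters):
        diff = eta * (L_hat - x)         # strictly positive
        w = 4.0 / diff ** 2
        s = w.sum()
        if abs(s - 1.0) < tol:
            break
        ds = (8.0 * eta / diff ** 3).sum()   # derivative of s w.r.t. x
        x -= (s - 1.0) / ds              # Newton step toward sum(w) = 1
    return w / w.sum()

def decoupled_tsallis_inf(loss_fn, K, T, rng=None):
    # Decoupled protocol: each round, suffer the exploited arm's loss
    # without observing it, and observe (but do not suffer) the explored
    # arm's loss; only the explored arm feeds the loss estimates.
    rng = np.random.default_rng() if rng is None else rng
    L_hat = np.zeros(K)                  # importance-weighted loss estimates
    total_loss = 0.0
    for t in range(1, T + 1):
        eta = 2.0 / np.sqrt(t)           # anytime learning-rate schedule
        w = tsallis_inf_weights(L_hat, eta)
        exploit = rng.choice(K, p=w)     # loss suffered blindly
        explore = rng.choice(K, p=w)     # loss observed, not suffered
        total_loss += loss_fn(t, exploit)
        L_hat[explore] += loss_fn(t, explore) / w[explore]
    return total_loss

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = np.array([0.5, 0.5, 0.3])    # hypothetical Bernoulli loss means
    loss_fn = lambda t, i: float(rng.random() < means[i])
    print(decoupled_tsallis_inf(loss_fn, K=3, T=10_000, rng=rng))
```

The Newton solve uses the closed form of the 1/2-Tsallis FTRL weights, $w_i = 4\,(\eta(\hat{L}_i - x))^{-2}$, with $x$ chosen so the weights sum to one; and since the explored arm is sampled from $w_t$ here, its observed loss is importance-weighted by $1/w_t(i)$.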
Original language: English
Title of host publication: Proceedings of the Thirty Third Conference on Learning Theory (COLT)
Publisher: PMLR
Publication date: 2020
Pages: 3227-3249
Publication status: Published - 2020
Series: Proceedings of Machine Learning Research
Volume: 125
ISSN: 1938-7228

ID: 272647540