Adaptation to Easy Data in Prediction with Limited Advice

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

We derive an online learning algorithm with improved regret guarantees for ``easy'' loss sequences. We consider two types of ``easiness'': (a) stochastic loss sequences and (b) adversarial loss sequences with a small effective range of the losses. While a number of algorithms have been proposed for exploiting a small effective range in the full information setting, Gerchinovitz and Lattimore [2016] have shown the impossibility of regret scaling with the effective range of the losses in the bandit setting. We show that just one additional observation per round is sufficient to bypass the impossibility result. The proposed \emph{Second Order Difference Adjustments} (SODA) algorithm requires no prior knowledge of the effective range of the losses, $\varepsilon$, and achieves an $O(\varepsilon \sqrt{KT \ln K}) + \tilde{O}(\varepsilon K \sqrt[4]{T})$ expected regret guarantee, where $T$ is the time horizon and $K$ is the number of actions. The scaling with the effective loss range is achieved under significantly weaker assumptions than those made by Cesa-Bianchi and Shamir [2018] in an earlier attempt to bypass the impossibility result. We also provide a regret lower bound of $\Omega(\varepsilon\sqrt{TK})$, which almost matches the upper bound. In addition, we show that in the stochastic setting SODA achieves an $O\left(\sum_{a:\Delta_a>0} \frac{K\varepsilon^2}{\Delta_a}\right)$ pseudo-regret bound that holds simultaneously with the adversarial regret guarantee. In other words, SODA is safe against an unrestricted oblivious adversary and provides improved regret guarantees for at least two different types of ``easiness'' simultaneously.
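The abstract refers to the ``prediction with limited advice'' feedback model, in which the learner plays one action per round and additionally observes the loss of one more action. The following is a minimal sketch of that protocol only, under stated assumptions: it uses a plain exponential-weights update on importance-weighted estimates built from the extra, uniformly sampled observation. The function name limited_advice_play, the fixed step size eta, and the update rule are illustrative choices and do not reproduce the paper's SODA algorithm or its second-order difference adjustments.

```python
import numpy as np

def limited_advice_play(losses, eta=0.05, seed=0):
    """Sketch of the limited-advice protocol: each round the learner plays one
    action (incurring its loss) and additionally observes the loss of one
    uniformly sampled action. NOT the paper's SODA algorithm; the update here
    is ordinary exponential weights on importance-weighted loss estimates."""
    rng = np.random.default_rng(seed)
    T, K = losses.shape
    log_w = np.zeros(K)              # log-weights over the K actions
    cumulative_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        a_play = int(rng.choice(K, p=p))   # action played; its loss is incurred
        a_obs = int(rng.integers(K))       # one additional observation per round
        cumulative_loss += losses[t, a_play]
        # Unbiased loss-vector estimate from the uniform extra observation:
        # E[K * loss[t, a] * 1{a_obs = a}] = loss[t, a] for every action a.
        est = np.zeros(K)
        est[a_obs] = K * losses[t, a_obs]
        log_w -= eta * est
    return cumulative_loss

# Example usage on synthetic losses in [0, 1]:
# total = limited_advice_play(np.random.default_rng(1).random((1000, 5)))
```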
Original language: English
Title of host publication: Proceedings of 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada
Publisher: NIPS Proceedings
Publication date: 2018
Article number: 10
Publication status: Published - 2018
Event: 32nd Annual Conference on Neural Information Processing Systems - Montreal, Montreal, Canada
Duration: 2 Dec 2018 - 8 Dec 2018
Conference number: 32
https://nips.cc/Conferences/2018

Conference

Conference: 32nd Annual Conference on Neural Information Processing Systems
Number: 32
Location: Montreal
Country: Canada
City: Montreal
Period: 02/12/2018 - 08/12/2018
Internet address: https://nips.cc/Conferences/2018
Series: Advances in Neural Information Processing Systems
Volume: 31
ISSN: 1049-5258

ID: 225479757