Provably Efficient Offline Reinforcement Learning in Regular Decision Processes

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 645 KB, PDF document

This paper deals with offline (or batch) Reinforcement Learning (RL) in episodic Regular Decision Processes (RDPs). RDPs are the subclass of Non-Markov Decision Processes where the dependency on the history of past events can be captured by a finite-state automaton. We consider a setting where the automaton underlying the RDP is unknown, and a learner strives to learn a near-optimal policy using pre-collected data, in the form of non-Markov sequences of observations, without further exploration. We present RegORL, an algorithm that suitably combines automata learning techniques and state-of-the-art algorithms for offline RL in MDPs. RegORL has a modular design that allows one to use any off-the-shelf offline RL algorithm for MDPs. We report a non-asymptotic high-probability sample complexity bound for RegORL to yield an ε-optimal policy, which brings out a notion of concentrability relevant for RDPs. Furthermore, we present a sample complexity lower bound for offline RL in RDPs. To the best of our knowledge, this is the first work presenting a provably efficient algorithm for offline learning in RDPs.
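To make the modular design concrete, below is a minimal Python sketch of the two-phase pipeline the abstract describes: learn a finite-state automaton from the offline trajectories, relabel the data with automaton states so it becomes Markovian, and then hand it to any off-the-shelf offline RL algorithm for MDPs. All names here (learn_automaton, offline_rl, relabel) are illustrative placeholders under these assumptions, not the paper's actual interface.

```python
# Hypothetical sketch of an automata-learning + offline-RL pipeline.
# None of these names come from the paper; they only illustrate the
# modular structure described in the abstract.

from dataclasses import dataclass
from typing import Callable, Hashable, List, Tuple

Obs = Hashable
Action = Hashable
Step = Tuple[Obs, Action, float]      # (observation, action, reward)
Trajectory = List[Step]


@dataclass
class Automaton:
    """Deterministic finite-state automaton over (obs, action) symbols."""
    initial_state: int
    transition: Callable[[int, Tuple[Obs, Action]], int]


def relabel(dataset: List[Trajectory], aut: Automaton) -> List[Trajectory]:
    """Augment each step with the automaton state reached so far.

    After relabeling, (automaton state, observation) forms a Markovian
    state, so standard offline RL for MDPs applies to the result.
    """
    mdp_dataset = []
    for traj in dataset:
        q = aut.initial_state
        relabeled = []
        for obs, act, rew in traj:
            relabeled.append(((q, obs), act, rew))
            q = aut.transition(q, (obs, act))  # advance the automaton
        mdp_dataset.append(relabeled)
    return mdp_dataset


def regorl_sketch(dataset, learn_automaton, offline_rl):
    """Modular pipeline: automaton learning, relabeling, offline RL.

    `learn_automaton` and `offline_rl` are placeholders for any
    automaton-learning routine and any offline MDP algorithm.
    """
    aut = learn_automaton(dataset)       # phase 1: recover the RDP's automaton
    mdp_dataset = relabel(dataset, aut)  # phase 2: Markovianize the data
    return offline_rl(mdp_dataset)       # phase 3: plug-in offline RL for MDPs
```

The relabeling step is what enables the modularity: once each history is summarized by an automaton state, any offline MDP algorithm, together with its guarantees, can be run on the relabeled dataset.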
Original language: English
Title: Advances in Neural Information Processing Systems 36 (NeurIPS 2023)
Number of pages: 34
Publisher: NeurIPS Proceedings
Publication date: 2023
Status: Published - 2023
Event: 37th Conference on Neural Information Processing Systems - NeurIPS 2023 - New Orleans, USA
Duration: 10 Dec 2023 – 16 Dec 2023

Conference

Conference: 37th Conference on Neural Information Processing Systems - NeurIPS 2023
Country: USA
City: New Orleans
Period: 10/12/2023 – 16/12/2023
Name: Advances in Neural Information Processing Systems
Volume: 36
ISSN: 1049-5258
