Metadata only
Date
2021-01-01
Type
- Conference Paper
ETH Bibliography
yes
Abstract
We propose a new reinforcement learning algorithm derived from a regularized linear-programming formulation of optimal control in MDPs. The method is closely related to the classic Relative Entropy Policy Search (REPS) algorithm of Peters et al. (2010), with the key difference that our method introduces a Q-function that enables an efficient, exact model-free implementation. The main feature of our algorithm (called Q-REPS) is a convex loss function for policy evaluation that serves as a theoretically sound alternative to the widely used squared Bellman error. We provide a practical saddle-point optimization method for minimizing this loss function and give an error-propagation analysis that relates the quality of the individual updates to the performance of the output policy. Finally, we demonstrate the effectiveness of our method on a range of benchmark problems.
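To make the abstract's contrast with the squared Bellman error concrete, the following is a minimal, hypothetical sketch of a convex log-sum-exp ("softened") Bellman-error objective for policy evaluation on sampled transitions, optimized by plain subgradient descent. It is not the Q-REPS loss or code from the paper; all names, shapes, the temperature eta, and the uniform evaluation policy are illustrative assumptions, and the paper's actual method uses a saddle-point solver rather than the simple descent loop shown here.

```python
# Hypothetical illustration of a convex log-sum-exp relaxation of the
# empirical Bellman error (NOT the exact Q-REPS objective).
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions, gamma, eta = 5, 3, 0.9, 1.0

# Tabular Q-function parameters (the optimization variable).
Q = np.zeros((n_states, n_actions))

# Fixed evaluation policy (uniform, purely for illustration).
pi = np.full((n_states, n_actions), 1.0 / n_actions)

# A small batch of synthetic transitions (s, a, r, s').
N = 64
s = rng.integers(n_states, size=N)
a = rng.integers(n_actions, size=N)
r = rng.random(N)
s_next = rng.integers(n_states, size=N)


def loss_and_grad(Q):
    """Convex surrogate: (1/eta) * log( mean_i exp(eta * delta_i) ),
    where delta_i = r_i + gamma * E_{a'~pi}[Q(s'_i, a')] - Q(s_i, a_i).
    As eta -> 0 this approaches the mean TD error; large eta emphasizes
    the worst transitions in the batch.
    """
    v_next = (pi[s_next] * Q[s_next]).sum(axis=1)   # value of s' under pi
    delta = r + gamma * v_next - Q[s, a]            # TD errors (affine in Q)
    z = eta * delta
    m = z.max()
    w = np.exp(z - m)                               # stable log-mean-exp
    loss = (m + np.log(w.mean())) / eta
    w /= w.sum()                                    # softmax weights over samples
    grad = np.zeros_like(Q)
    np.add.at(grad, (s, a), -w)                     # d delta_i / d Q(s_i, a_i) = -1
    np.add.at(grad, s_next, gamma * w[:, None] * pi[s_next])
    return loss, grad


# Plain subgradient descent on the convex surrogate; the paper's approach
# would instead solve a saddle-point problem with primal/dual updates.
for t in range(200):
    loss, grad = loss_and_grad(Q)
    Q -= 0.5 * grad
```

Because the TD error is affine in Q and log-mean-exp is convex and monotone, this surrogate is convex in Q, which is the property that motivates this family of losses over the non-convex squared Bellman error.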
Publication status
published
External links
Book title
Proceedings of the 24th International Conference on Artificial Intelligence and Statistics (AISTATS 2021)
Journal / series
Proceedings of Machine Learning Research
Volume
Pages / Article No.
Publisher
PMLR
Event
Organisational unit
03908 - Krause, Andreas / Krause, Andreas
Funding
815943 - Reliable Data-Driven Decision Making in Cyber-Physical Systems (EC)