Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space
OPEN ACCESS
Date
2023
Publication Type
Conference Paper
ETH Bibliography
yes
Abstract
We consider the reinforcement learning (RL) problem with general utilities, which consists in maximizing a function of the state-action occupancy measure. Beyond the standard cumulative-reward RL setting, this problem includes as particular cases constrained RL, pure exploration, and learning from demonstrations, among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{O}(\epsilon^{-3})$ and $\tilde{O}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality respectively, under adequate assumptions. We further address the setting of large finite state-action spaces via linear function approximation of the occupancy measure and show a $\tilde{O}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.
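The single-loop update the abstract describes — a normalized gradient step driven by a recursive momentum (STORM-style) gradient estimate — can be sketched in a few lines. The sketch below is an illustration under assumptions, not the paper's implementation: `storm_npg`, `grad_fn`, and `sample_fn` are hypothetical names, and the toy quadratic objective stands in for the general-utility objective.

```python
import numpy as np

def storm_npg(grad_fn, sample_fn, theta0, steps=400, eta=0.05, alpha=0.3, rng=None):
    """Sketch of a single-loop normalized gradient-ascent method with
    recursive momentum (STORM-style) variance reduction.

    grad_fn(theta, sample) -> stochastic gradient of the objective at theta,
    evaluated on a given random sample; sample_fn(rng) draws one sample.
    All names and the interface are illustrative assumptions.
    """
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float)
    d = grad_fn(theta, sample_fn(rng))  # initial momentum estimate
    for _ in range(steps):
        # Normalized ascent step: fixed step length eta, direction d/||d||.
        theta_new = theta + eta * d / (np.linalg.norm(d) + 1e-12)
        # Recursive momentum: d_{t+1} = g(theta_{t+1}) + (1-alpha)(d_t - g(theta_t)),
        # with both gradients evaluated on the SAME fresh sample.
        sample = sample_fn(rng)
        d = grad_fn(theta_new, sample) + (1.0 - alpha) * (d - grad_fn(theta, sample))
        theta = theta_new
    return theta
```

On a noisy concave toy problem (maximizing $-\|\theta - \theta^\star\|^2$ with additive gradient noise), the iterates settle in a small ball around the maximizer, since the normalized step has constant length while momentum averages out the noise:

```python
target = np.array([1.0, -2.0])
grad_fn = lambda th, s: -2.0 * (th - target) + s   # noisy gradient oracle
sample_fn = lambda rng: 0.1 * rng.standard_normal(2)
theta = storm_npg(grad_fn, sample_fn, np.zeros(2), rng=np.random.default_rng(1))
```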
Publication status
published
Book title
Proceedings of the 40th International Conference on Machine Learning
Volume
202
Pages / Article No.
1753 - 1800
Publisher
PMLR
Event
40th International Conference on Machine Learning (ICML 2023)
Subject
reinforcement learning; policy gradient methods; convex RL; global convergence
Organisational unit
09729 - He, Niao
02219 - ETH AI Center