Reinforcement Learning with General Utilities: Simpler Variance Reduction and Large State-Action Space

Date

2023

Publication Type

Conference Paper

ETH Bibliography

yes
Abstract

We consider the reinforcement learning (RL) problem with general utilities, which consists of maximizing a function of the state-action occupancy measure. Beyond the standard cumulative-reward RL setting, this problem includes as particular cases constrained RL, pure exploration, and learning from demonstrations, among others. For this problem, we propose a simpler single-loop parameter-free normalized policy gradient algorithm. Implementing a recursive momentum variance reduction mechanism, our algorithm achieves $\tilde{O}(\epsilon^{-3})$ and $\tilde{O}(\epsilon^{-2})$ sample complexities for $\epsilon$-first-order stationarity and $\epsilon$-global optimality respectively, under adequate assumptions. We further address the setting of large finite state-action spaces via linear function approximation of the occupancy measure and show a $\tilde{O}(\epsilon^{-4})$ sample complexity for a simple policy gradient method with a linear regression subroutine.
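The "recursive momentum variance reduction" mentioned in the abstract refers to a STORM-style gradient estimator combined with a normalized update. A minimal sketch of that mechanism on a toy smooth objective is shown below; the objective, gradient oracle, and hyperparameters (`eta`, `beta`) are illustrative stand-ins and not the paper's exact algorithm, which operates on policy parameters and occupancy-measure gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_grad(theta, xi):
    # Noisy gradient of a toy concave utility F(theta) = -||theta - 1||^2 / 2.
    # `xi` is the sampled noise, shared across evaluations at two iterates
    # so the recursive-momentum correction uses the same sample.
    return (1.0 - theta) + 0.1 * xi

def normalized_pg_storm(theta0, steps=2000, eta=0.05, beta=0.9):
    theta_prev = theta0.copy()
    d = stochastic_grad(theta_prev, rng.standard_normal(theta0.shape))
    for _ in range(steps):
        # Normalized ascent step: only the direction of the momentum
        # estimate is used, so the step size need not be tuned to the
        # gradient scale (the "parameter-free" aspect).
        theta = theta_prev + eta * d / (np.linalg.norm(d) + 1e-12)
        xi = rng.standard_normal(theta.shape)
        # Recursive momentum (STORM-style): fresh gradient plus a
        # variance-reduction correction evaluated with the same sample
        # at the previous iterate.
        d = stochastic_grad(theta, xi) + (1 - beta) * (d - stochastic_grad(theta_prev, xi))
        theta_prev = theta
    return theta

theta_star = normalized_pg_storm(np.zeros(3))
print(np.round(theta_star, 2))  # should land near the optimum at all-ones
```

In the single-loop algorithm this estimator replaces the large checkpoint batches of double-loop variance-reduced methods: each iteration needs only one fresh sample, reused at the current and previous iterates.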

Publication status

published

Book title

Proceedings of the 40th International Conference on Machine Learning

Volume

202

Pages / Article No.

1753 - 1800

Publisher

PMLR

Event

40th International Conference on Machine Learning (ICML 2023)

Subject

reinforcement learning; policy gradient methods; convex RL; global convergence

Organisational unit

09729 - He, Niao / He, Niao
02219 - ETH AI Center / ETH AI Center
