Performance guarantees for model-based Approximate Dynamic Programming in continuous spaces

Open access
Date
2020-01
Type
Journal Article
Abstract
We study both the value function and Q-function formulations of the Linear Programming approach to Approximate Dynamic Programming. The approach is model-based and optimizes over a restricted function space to approximate the value function or Q-function. Working in the discrete-time, continuous-space setting, we provide guarantees for the fitting error and the online performance of the policy. In particular, the online performance guarantee is obtained by analyzing an iterated version of the greedy policy, and the fitting error guarantee by analyzing an iterated version of the Bellman inequality. These guarantees complement the existing bounds that appear in the literature. The Q-function formulation offers benefits, for example in decentralized controller design; however, it can lead to computationally demanding optimization problems. To alleviate this drawback, we provide a condition that simplifies the formulation, resulting in improved computational times.
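
For context, the linear programming approach to approximate dynamic programming referenced in the abstract can be sketched in a generic, standard form; the notation below (stage cost \ell, discount factor \gamma, state-relevance measure c, restricted function class \mathcal{F}) is assumed for illustration and is not necessarily the exact formulation used in the article.

% Generic value-function LP for approximate dynamic programming (a sketch,
% not the article's exact formulation). Assumed notation: stage cost \ell,
% discount factor \gamma \in (0,1), state-relevance measure c, restricted
% function class \mathcal{F}.
\begin{align*}
  \max_{V \in \mathcal{F}} \quad & \int_{\mathcal{X}} V(x)\, c(\mathrm{d}x) \\
  \text{s.t.} \quad & V(x) \;\le\; \ell(x,u) + \gamma\, \mathbb{E}\!\left[ V(x^{+}) \mid x, u \right]
      \quad \forall\, (x,u) \in \mathcal{X} \times \mathcal{U}.
\end{align*}
% Each constraint is the Bellman inequality V \le T V written pointwise: any
% feasible V under-estimates the true value function, and maximizing the
% c-weighted integral selects the feasible under-estimator that is closest
% in the c-weighted L1 sense.

The "iterated version of the Bellman inequality" mentioned in the abstract replaces the one-step condition V \le T V with the weaker requirement V \le T^K V for some K > 1, which enlarges the feasible set of the program; this reading follows the standard use of the term and may differ in detail from the article's construction.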
Permanent link
https://doi.org/10.3929/ethz-b-000388076
Publication status
published
Journal / series
IEEE Transactions on Automatic Control
Publisher
IEEE
Subject
Discrete-time systems; dynamic programming; infinite horizon optimal control; stochastic systems
Organisational unit
03751 - Lygeros, John / Lygeros, John
Funding
787845 - Optimal control at large (EC)