Learning Q-function approximations for hybrid control problems


Date

2022

Publication Type

Journal Article

ETH Bibliography

yes
Abstract

The main challenge in controlling hybrid systems arises from having to consider an exponential number of sequences of future modes to make good long-term decisions. Model predictive control (MPC) computes a control action through a finite-horizon optimisation problem. A key ingredient in this problem is a terminal cost, to account for the system’s evolution beyond the chosen horizon. A good terminal cost can reduce the horizon length required for good control action and is often tuned empirically by observing performance. We build on the idea of using N-step Q-functions (Q(N)) in the MPC objective to avoid having to choose a terminal cost. We present a formulation incorporating the system dynamics and constraints to approximate the optimal Q(N)-function, and algorithms to train the approximation parameters through an exploration of the state space. We test the control policy derived from the trained approximations on two benchmark problems through simulations, and observe that our algorithms are able to learn good Q(N)-approximations for hybrid systems with dimensions of practical relevance based on a relatively small dataset. We compare our controller’s performance against that of hybrid MPC in terms of computation time and closed-loop costs.
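To make the idea concrete, here is a minimal sketch of receding-horizon control for a toy hybrid system where a learned value approximation stands in for the hand-tuned terminal cost. All names, the two-mode 1-D dynamics, the quadratic stand-in for the trained Q(N)-approximation, and the brute-force enumeration are illustrative assumptions, not the paper's formulation or algorithms.

```python
import itertools
import numpy as np

# Hypothetical 1-D hybrid system: two modes with different gains.
# The mode is a discrete decision at every step, so a horizon of H steps
# requires searching 2^H mode sequences -- the exponential growth in
# future mode sequences that the abstract refers to.
A = {0: 0.9, 1: 1.2}   # per-mode state gain (assumed values)
B = {0: 0.5, 1: 1.0}   # per-mode input gain (assumed values)

def step(x, u, mode):
    return A[mode] * x + B[mode] * u

def stage_cost(x, u):
    return x**2 + 0.1 * u**2

# Stand-in for the trained approximation: a simple quadratic value p * x^2
# at the end of the horizon. The paper learns a richer, constraint-aware
# Q(N)-parameterisation; this placeholder only illustrates where it enters.
p_learned = 2.0
def terminal_value(x):
    return p_learned * x**2

def mpc_action(x0, H=3, u_grid=np.linspace(-1.0, 1.0, 9)):
    """Enumerate mode sequences and coarsely discretised inputs over a short
    horizon H; the learned value replaces a hand-tuned terminal cost."""
    best_cost, best_u, best_m = np.inf, 0.0, 0
    for modes in itertools.product((0, 1), repeat=H):
        for us in itertools.product(u_grid, repeat=H):
            x, cost = x0, 0.0
            for u, m in zip(us, modes):
                cost += stage_cost(x, u)
                x = step(x, u, m)
            cost += terminal_value(x)  # Q(N)/value approximation at horizon end
            if cost < best_cost:
                best_cost, best_u, best_m = cost, us[0], modes[0]
    return best_u, best_m  # apply only the first decision (receding horizon)

# Closed-loop simulation from x = 1.0: the short-horizon controller,
# guided by the terminal value, drives the state towards the origin.
x = 1.0
for _ in range(5):
    u, m = mpc_action(x)
    x = step(x, u, m)
```

Brute-force enumeration is only tractable for this toy size; it makes explicit why a good terminal value matters, since a shorter horizon H directly shrinks the 2^H mode-sequence search.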

Publication status

published

Volume

6

Pages / Article No.

1364–1369

Publisher

IEEE

Organisational unit

03751 - Lygeros, John

Funding

787845 - Optimal control at large (EC)