Probabilistic model predictive safety certification for learning-based control
Metadata only
Date
2022-01
Type
- Journal Article
Citations
Cited 21 times in
Web of Science
Cited 22 times in
Scopus
ETH Bibliography
yes
Altmetrics
Abstract
Reinforcement learning (RL) methods have demonstrated their efficiency in simulation environments. However, many applications for which RL offers great potential, such as autonomous driving, are also safety critical and require a certified closed-loop behavior in order to meet safety specifications in the presence of physical constraints. This paper introduces a concept called probabilistic model predictive safety certification (PMPSC), which can be combined with any RL algorithm and provides provable safety certificates in terms of state and input chance constraints for potentially large-scale systems. The certificate is realized through a stochastic tube that safely connects the current system state with a terminal set of states that is known to be safe. A novel formulation allows a recursively feasible real-time computation of such probabilistic tubes, despite the presence of possibly unbounded disturbances. A design procedure for PMPSC relying on Bayesian inference and recent advances in probabilistic set invariance is presented. Using a numerical car simulation, the method and its design procedure are illustrated by enhancing an RL algorithm with safety certificates.
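The abstract describes the certification mechanism only at a high level. The following is a minimal, hypothetical sketch of a generic MPC-style safety filter in the same spirit: the RL input is modified as little as possible so that a nominal prediction satisfies tightened state and input bounds (a deterministic stand-in for the paper's chance constraints and stochastic tubes) and reaches a terminal safe set. It is not the PMPSC formulation from the paper; the double-integrator model, horizon, bounds, terminal set, the certify function, and the use of cvxpy are all illustrative assumptions.

# Hypothetical safety-filter sketch (assumptions, not the paper's PMPSC method).
import numpy as np
import cvxpy as cp

A = np.array([[1.0, 0.1],      # assumed double-integrator dynamics, x = [position, velocity]
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
N = 10                          # prediction horizon (illustrative)
x_max = np.array([1.0, 0.5])    # tightened state bounds (stand-in for chance constraints)
u_max = 1.0                     # input bound

def certify(x0, u_rl):
    """Return an input close to u_rl whose nominal prediction stays safe."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = cp.sum_squares(u[:, 0] - u_rl)            # stay close to the RL proposal
    cons = [x[:, 0] == x0]
    for k in range(N):
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(x[:, k + 1]) <= x_max,
                 cp.abs(u[:, k]) <= u_max]
    cons += [x[:, N] == 0]                            # simple terminal safe set {0}
    prob = cp.Problem(cp.Minimize(cost), cons)
    prob.solve()
    if prob.status in ("optimal", "optimal_inaccurate"):
        return float(u.value[0, 0])
    return 0.0                                        # fall back to a backup input

# Example: certify an aggressive RL input from a state near the constraint boundary.
u_safe = certify(np.array([0.2, 0.0]), u_rl=2.0)

In the paper, the deterministic tightening and fixed terminal set used above are replaced by probabilistic tubes and chance constraints that remain recursively feasible under possibly unbounded disturbances.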
Publication status
published
External links
Journal / series
IEEE Transactions on Automatic Control
Volume
Pages / Article No.
Publisher
IEEE
Subject
Reinforcement learning (RL); Stochastic systems; Predictive control; Safety
Organisational unit
09563 - Zeilinger, Melanie / Zeilinger, Melanie
Funding
157601 - Safety and Performance for Human in the Loop Control (SNF)
141853 - Digital Fabrication - Advanced Building Processes in Architecture (SNF)