
Open access
Date
2020
Type
Conference Paper
ETH Bibliography
yes
Abstract
In this paper, we introduce an actor-critic algorithm called Deep Value Model Predictive Control (DMPC), which combines model-based trajectory optimization with value function estimation. The DMPC actor is a Model Predictive Control (MPC) optimizer with an objective function defined in terms of a value function estimated by the critic. We show that our MPC actor is an importance sampler that minimizes an upper bound on the cross-entropy to the state distribution of the optimal sampling policy. In experiments with a Ballbot system, we show that our algorithm can work with sparse and binary reward signals to efficiently solve obstacle avoidance and target reaching tasks. Compared to previous work, we show that including the value function in the running cost of the trajectory optimizer speeds up convergence. We also discuss strategies for making the algorithm robust in practice.
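As a rough illustration of the idea described in the abstract, the following is a minimal Python sketch of an actor-critic loop in this spirit: a sampling-based MPC actor scores sampled trajectories with a learned value function and importance-weights the samples (MPPI-style), while a TD(0) critic is fit from the resulting rollouts. This is not the authors' implementation: the dynamics, reward, features, and all hyperparameters are hypothetical stand-ins rather than the paper's Ballbot setup, and the paper additionally injects the value function into the running cost, whereas this sketch uses it only as a terminal term for brevity.

```python
import numpy as np

# --- Hypothetical problem definition (placeholders, not from the paper) ---
def dynamics(x, u):
    """Toy linear dynamics standing in for the Ballbot model."""
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    return A @ x + B @ u

def reward(x, u):
    """Sparse-style reward: bonus near the origin, small control penalty."""
    return float(np.linalg.norm(x) < 0.1) - 1e-3 * float(u @ u)

# --- Critic: value function with simple polynomial features, fit by TD(0) ---
def features(x):
    return np.concatenate([x, x**2, [1.0]])

class Critic:
    def __init__(self, dim, lr=1e-2):
        self.w = np.zeros(dim)
        self.lr = lr

    def value(self, x):
        return self.w @ features(x)

    def td_update(self, x, r, x_next, gamma):
        # One-step temporal-difference update toward r + gamma * V(x').
        target = r + gamma * self.value(x_next)
        self.w += self.lr * (target - self.value(x)) * features(x)

# --- Actor: sampling-based MPC whose objective ends in the critic's value ---
def mpc_actor(x0, critic, horizon=10, samples=256, gamma=0.99, sigma=0.5, temp=1.0):
    """Sample control sequences, score them by discounted reward-to-go plus
    the learned terminal value, and importance-weight them (MPPI-style)."""
    U = sigma * np.random.randn(samples, horizon, 1)
    scores = np.zeros(samples)
    for k in range(samples):
        x = x0.copy()
        for t in range(horizon):
            scores[k] += gamma**t * reward(x, U[k, t])
            x = dynamics(x, U[k, t])
        scores[k] += gamma**horizon * critic.value(x)  # critic as terminal cost
    w = np.exp((scores - scores.max()) / temp)  # softmax importance weights
    w /= w.sum()
    return (w[:, None] * U[:, 0]).sum(axis=0)  # weighted average first action

# --- Training loop: act with the MPC actor, improve the critic from rollouts ---
critic = Critic(dim=features(np.zeros(2)).size)
gamma = 0.99
for episode in range(20):
    x = np.array([1.0, 0.0])
    for step in range(50):
        u = mpc_actor(x, critic, gamma=gamma)
        x_next = dynamics(x, u)
        critic.td_update(x, reward(x, u), x_next, gamma)
        x = x_next
```

The softmax weighting over sampled trajectories is one concrete way to realize the importance-sampling view mentioned in the abstract; as the critic improves, the terminal value term lets a short MPC horizon account for long-term, sparse rewards.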
Permanent link
https://doi.org/10.3929/ethz-b-000368961
Publication status
published
Book title
Proceedings of the Conference on Robot Learning
Journal / series
Proceedings of Machine Learning Research
Publisher
PMLR
Subject
ROBOTICS; REINFORCEMENT LEARNING (ARTIFICIAL INTELLIGENCE); Model Predictive Control (MPC)
Organisational unit
09570 - Hutter, Marco / Hutter, Marco