Exploring Constrained Reinforcement Learning Algorithms for Quadrupedal Locomotion
Abstract
Shifting from traditional control strategies to Deep Reinforcement Learning (RL) for legged robots poses inherent challenges, especially when addressing real-world physical constraints during training. While high-fidelity simulations provide significant benefits, they often bypass these essential physical limitations. In this paper, we experiment with the Constrained Markov Decision Process (CMDP) framework instead of the conventional unconstrained RL formulation for robotic applications. We evaluate five constrained policy optimization algorithms for quadrupedal locomotion on three different robot models, with the aim of assessing their applicability in real-world scenarios. Our robot experiments demonstrate the critical role of incorporating physical constraints, yielding successful sim-to-real transfers and reducing operational errors on physical systems. The CMDP formulation streamlines the training process by handling constraints separately from rewards. Our findings underscore the potential of constrained RL for the effective development and deployment of learned controllers in robotics.
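For context, the CMDP formulation referred to in the abstract can be summarized by the standard constrained objective below; this is a generic sketch in assumed notation (the symbols R, C_i, d_i, and gamma are illustrative, not taken from the paper):

\[
\max_{\pi} \; J_R(\pi) = \mathbb{E}_{\tau \sim \pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t} R(s_t, a_t)\Big]
\quad \text{s.t.} \quad
J_{C_i}(\pi) = \mathbb{E}_{\tau \sim \pi}\!\Big[\sum_{t=0}^{\infty} \gamma^{t} C_i(s_t, a_t)\Big] \le d_i, \quad i = 1, \dots, m,
\]

where R is the task reward, each C_i is a cost function encoding a physical limitation (for example, a joint torque or velocity bound), and d_i is the admissible threshold for that cost. Constrained policy optimization algorithms solve this problem directly, so physical limits need not be folded into the reward as hand-tuned penalty terms.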
Permanent link
https://doi.org/10.3929/ethz-b-000703818
Publication status
published
Book title
2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Publisher
IEEE
Organisational unit
09570 - Hutter, Marco / Hutter, Marco
Funding
852044 - Learning Mobility for Real Legged Robots (EC)
ETH Bibliography
yes