Metadata only
Date
2022-10
Type
Journal Article
Abstract
The process of robot design is a complex task, and the majority of design decisions are still based on human intuition or tedious manual tuning. A more informed way of approaching this task is computational design, in which design parameters are optimized concurrently with the corresponding controllers. Existing approaches, however, are strongly influenced by predefined control rules or motion templates and cannot provide end-to-end solutions. In this paper, we present a design optimization framework using model-free meta reinforcement learning and its application to optimizing the kinematics and actuator parameters of quadrupedal robots. We use meta reinforcement learning to train a locomotion policy that can quickly adapt to different designs. This policy is used to evaluate each design instance during the design optimization. We demonstrate that the policy can control robots of different designs to track random velocity commands over various rough terrains. With controlled experiments, we show that the meta policy achieves close-to-optimal performance for each design instance after adaptation. Lastly, we compare our results against a model-based baseline and show that our approach achieves higher performance while not being constrained by predefined motions or gait patterns.
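The core loop described in the abstract, a meta-trained locomotion policy that is briefly adapted to each candidate design and then used to score that design, can be illustrated with a minimal sketch. Everything below is assumed for illustration: the toy velocity-tracking reward, the linear policy, the evolution-strategies adaptation step, and the random-search outer loop are stand-ins for the paper's actual meta-RL training and design optimizer, which are not specified in this record.

```python
# Hypothetical sketch of the design-evaluation loop: adapt a meta policy to
# each candidate design for a few steps, then score the design by the adapted
# policy's tracking return. All names and dynamics are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def rollout_return(design, policy_params, episodes=4, horizon=50):
    """Toy proxy for velocity-command tracking: reward is higher when the
    (linear) policy's response matches a commanded velocity under a
    design-dependent scaling (e.g. link lengths, actuator gains)."""
    total = 0.0
    for _ in range(episodes):
        command = rng.uniform(-1.0, 1.0, size=2)         # random velocity command
        state = np.concatenate([command, design])        # observation incl. design
        for _ in range(horizon):
            action = policy_params @ state               # linear policy
            achieved = np.tanh(action) * design[:2]      # design-dependent response
            total += -np.sum((achieved - command) ** 2)  # tracking reward
    return total / episodes

def adapt(meta_params, design, steps=5, lr=0.05, sigma=0.1, pop=16):
    """Few-shot adaptation of the meta policy to one design instance,
    here via a simple evolution-strategies gradient estimate."""
    params = meta_params.copy()
    for _ in range(steps):
        noise = rng.normal(size=(pop,) + params.shape)
        scores = np.array([rollout_return(design, params + sigma * n) for n in noise])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)
        params += lr / (pop * sigma) * np.tensordot(scores, noise, axes=1)
    return params

# Outer loop: random search over design parameters; each candidate is scored
# by the performance of the policy after adaptation to that design.
obs_dim, act_dim, design_dim = 2 + 4, 2, 4
meta_params = rng.normal(scale=0.1, size=(act_dim, obs_dim))  # stands in for the meta-trained policy
best_design, best_score = None, -np.inf
for _ in range(20):
    design = rng.uniform(0.5, 1.5, size=design_dim)  # e.g. kinematic / actuator scales
    adapted = adapt(meta_params, design)
    score = rollout_return(design, adapted)
    if score > best_score:
        best_design, best_score = design, score
print("best design:", best_design, "score:", best_score)
```

The point of the structure, consistent with the abstract, is that the expensive part (meta-training) happens once, while each design evaluation only needs a short adaptation phase before the design can be scored.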
Publication status
Published
Journal / series
IEEE Robotics and Automation Letters
Volume
Pages / Article No.
Publisher
IEEE
Subject
Reinforcement Learning; Mechanism Design; Legged Robots
Funding
852044 - Learning Mobility for Real Legged Robots (EC)