Conference Paper
Meta-learning, transfer learning and multi-task learning have recently laid a path towards more generally applicable reinforcement learning agents that are not limited to a single task. However, most existing approaches implicitly assume a uniform similarity between tasks. We argue that this assumption is limiting in settings where the relationship between tasks is unknown a priori. In this work, we propose a general approach to automatically cluster together similar tasks during training. Our method, inspired by the expectation-maximization algorithm, succeeds at finding clusters of related tasks and uses these to improve sample complexity. In the expectation step, we evaluate the performance of a set of policies on all tasks and assign each task to the best-performing policy. In the maximization step, each policy trains by sampling tasks from its assigned set. This method is intuitive, simple to implement and orthogonal to other multi-task learning algorithms. We show the generality of our approach by evaluating on simple discrete and continuous control tasks, as well as complex bipedal walker tasks and Atari games. Results show improved sample complexity as well as broader applicability compared to other approaches.
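The abstract describes an EM-style loop: an E-step that assigns each task to its best-performing policy, and an M-step in which each policy trains on tasks sampled from its assigned set. The following is a minimal sketch of that loop under stated assumptions; the names `Policy`, `evaluate`, and `train_on` are hypothetical placeholders, not the paper's actual implementation.

```python
# Hedged sketch of the EM-style task-clustering loop from the abstract.
# All class/method names are illustrative assumptions, not the authors' code.
import random

class Policy:
    """Stand-in for an RL policy; assumed to expose evaluation and training."""
    def evaluate(self, task):
        # Placeholder: would return e.g. the average episodic return
        # of this policy on `task`.
        return random.random()

    def train_on(self, task):
        # Placeholder: one RL training update on `task` (e.g. a PPO step).
        pass

def em_task_clustering(policies, tasks, iterations=10, steps_per_iter=100):
    """Alternate E- and M-steps to cluster tasks across policies."""
    assignments = {}
    for _ in range(iterations):
        # E-step: evaluate every policy on every task and assign each
        # task to the policy that performs best on it.
        assignments = {i: [] for i in range(len(policies))}
        for task in tasks:
            scores = [p.evaluate(task) for p in policies]
            best = max(range(len(policies)), key=lambda i: scores[i])
            assignments[best].append(task)

        # M-step: each policy trains by sampling tasks from its assigned set.
        for i, policy in enumerate(policies):
            if not assignments[i]:
                continue  # a policy may temporarily own no tasks
            for _ in range(steps_per_iter):
                policy.train_on(random.choice(assignments[i]))
    return assignments
```

As the abstract notes, this loop is orthogonal to the underlying multi-task learning algorithm: `train_on` can wrap any single-task RL update.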
Book title: Deep Reinforcement Learning Workshop, NeurIPS 2020. Accepted Papers
Publisher: Deep RL Workshop
Organisational unit: 03604 - Wattenhofer, Roger / Wattenhofer, Roger
Notes: Due to the Coronavirus (COVID-19), the workshop was conducted virtually.