

Poster

Self-Composing Policies for Scalable Continual Reinforcement Learning

Mikel Malagón · Josu Ceberio · Jose A Lozano


Abstract:

This work introduces a growable and modular neural network architecture that naturally avoids catastrophic forgetting and interference in continual reinforcement learning. The structure of each module allows the selective combination of previous policies along with its internal policy, accelerating the learning process on the current task. Unlike previous growing neural network approaches, we show that the number of parameters of the proposed approach grows linearly with the number of tasks, and that it does not sacrifice plasticity to scale. Experiments on benchmark continuous-control and visual problems show that the proposed approach achieves greater knowledge transfer and performance than alternative methods.
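The core idea described in the abstract — adding one module per task, where each module combines frozen earlier policies with its own internal policy — can be sketched as follows. This is a hypothetical toy illustration under assumed details (a linear internal policy and a softmax gate over candidate outputs), not the paper's actual architecture: all class and parameter names (`ComposingModule`, `alpha`, the gating scheme) are invented for illustration.

```python
import numpy as np

class ComposingModule:
    """Toy sketch of one module per task (hypothetical, not the paper's design).

    Earlier modules are frozen, so training the newest module cannot
    overwrite previously learned policies (no catastrophic forgetting).
    """

    def __init__(self, obs_dim, act_dim, prev_modules, rng):
        self.prev = prev_modules  # frozen modules from earlier tasks
        # Internal policy: a fixed-size linear map, so each new task adds
        # a constant number of internal-policy parameters.
        self.W = rng.normal(0.0, 0.1, (act_dim, obs_dim))
        # One learnable mixing weight per previous policy, plus one for
        # the internal policy (assumed gating scheme).
        self.alpha = np.zeros(len(prev_modules) + 1)

    def act(self, obs):
        # Candidate actions: every previous policy's output, then our own.
        candidates = [m.act(obs) for m in self.prev]
        candidates.append(self.W @ obs)
        # Softmax gate selectively combines the candidates.
        w = np.exp(self.alpha - self.alpha.max())
        w /= w.sum()
        return sum(wi * c for wi, c in zip(w, candidates))

rng = np.random.default_rng(0)
modules = []
for task in range(3):  # grow the architecture by one module per task
    modules.append(ComposingModule(4, 2, list(modules), rng))

obs = np.ones(4)
action = modules[-1].act(obs)  # newest module can reuse all earlier skills
print(action.shape)  # (2,)
```

Freezing previous modules and letting only the newest module's gate and internal policy train is one simple way growing architectures avoid interference; the actual combination mechanism in the paper may differ.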
