Poster

Mixtures of Experts Unlock Parameter Scaling for Deep RL

Johan Obando Ceron · Ghada Sokar · Timon Willi · Clare Lyle · Jesse Farebrother · Jakob Foerster · Gintare Karolina Dziugaite · Doina Precup · Pablo Samuel Castro


Abstract:

The recent rapid progress in (self) supervised learning models is in large part predicted by empirical scaling laws: a model's performance scales proportionally to its size. Analogous scaling laws remain elusive for reinforcement learning domains, however, where increasing the parameter count of a model often hurts its final performance. In this paper, we demonstrate that incorporating Mixture-of-Expert (MoE) modules, and in particular Soft MoEs (Puigcerver et al., 2023), into value-based networks results in more parameter-scalable models, evidenced by substantial performance increases across a variety of training regimes and model sizes. This work thus provides strong empirical evidence towards developing scaling laws for reinforcement learning.
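To make the architectural idea concrete, below is a minimal, hedged sketch of a Soft MoE layer in the spirit of Puigcerver et al. (2023), the kind of module the abstract describes inserting into value-based networks. It is not the authors' implementation: the two-layer MLP experts, the shapes, and all hyperparameter values here are illustrative assumptions.

```python
# Minimal Soft MoE sketch (after Puigcerver et al., 2023). All names, shapes,
# and hyperparameters are assumptions for illustration, not the paper's code.
import jax
import jax.numpy as jnp


def init_soft_moe_params(key, dim, num_experts, slots_per_expert, hidden):
    """Randomly initialize slot logits and per-expert MLP weights."""
    k1, k2, k3 = jax.random.split(key, 3)
    num_slots = num_experts * slots_per_expert
    return {
        # phi maps tokens to slot logits, used for both dispatch and combine.
        "phi": jax.random.normal(k1, (dim, num_slots)) / jnp.sqrt(dim),
        # Each expert is a small two-layer MLP (an assumption; any expert works).
        "w1": jax.random.normal(k2, (num_experts, dim, hidden)) / jnp.sqrt(dim),
        "w2": jax.random.normal(k3, (num_experts, hidden, dim)) / jnp.sqrt(hidden),
    }


def soft_moe(params, x, num_experts, slots_per_expert):
    """x: (num_tokens, dim) -> (num_tokens, dim), fully differentiable routing."""
    logits = x @ params["phi"]                 # (tokens, slots)
    dispatch = jax.nn.softmax(logits, axis=0)  # weights over tokens per slot
    combine = jax.nn.softmax(logits, axis=1)   # weights over slots per token
    slots = dispatch.T @ x                     # (slots, dim): soft token mixtures
    slots = slots.reshape(num_experts, slots_per_expert, -1)
    # Each expert processes only its own slots.
    h = jax.nn.relu(jnp.einsum("esd,edh->esh", slots, params["w1"]))
    outs = jnp.einsum("esh,ehd->esd", h, params["w2"])
    outs = outs.reshape(num_experts * slots_per_expert, -1)
    return combine @ outs                      # (tokens, dim)


# Example: tokenized penultimate features of a value network (hypothetical sizes).
key = jax.random.PRNGKey(0)
tokens = jax.random.normal(key, (16, 64))      # 16 tokens of width 64
params = init_soft_moe_params(key, dim=64, num_experts=4,
                              slots_per_expert=1, hidden=128)
out = soft_moe(params, tokens, num_experts=4, slots_per_expert=1)
print(out.shape)  # (16, 64)
```

Because dispatch and combine are soft (dense softmax) assignments rather than discrete top-k routing, the layer stays fully differentiable, which is one reason Soft MoEs are a natural fit for the value-based RL setting the abstract targets.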
