

Poster

Stochastic Q-learning for Large Discrete Action Spaces

Fares Fourati · Vaneet Aggarwal · Mohamed-Slim Alouini


Abstract: In complex environments with large discrete action spaces, effective decision-making is critical in reinforcement learning (RL). Despite the widespread use of value-based approaches, such as Q-learning, they come with a computational burden, necessitating the maximization of a value function over all actions at each iteration. This burden becomes particularly challenging when addressing large-scale problems and using deep neural networks as function approximators. In this paper, we present stochastic value-based approaches that restrict the maximization to a variable stochastic set of at most $\mathcal{O}(\log(n))$ actions, as opposed to optimizing over the entire set of $n$ actions. The presented stochastic value-based methods include, among others, Stochastic Q-learning, StochDQN, and StochDDQN, all of which integrate this stochastic approach for both value-function updates and action selection. The theoretical convergence of Stochastic Q-learning is established, and an analysis of stochastic maximization is provided. Moreover, through empirical validation, we illustrate that the various proposed approaches outperform baselines across diverse environments, including control problems, achieving optimal average returns in significantly reduced time.
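To make the core idea concrete, the following is a minimal sketch (not the authors' reference implementation) of a tabular Q-learning update in which the target maximum is taken over a random subset of roughly $\log(n)$ actions rather than all $n$ actions, as described in the abstract. The helper name `stoch_max`, the subset size `k = ceil(log n)`, and the uniform sampling of the subset are illustrative assumptions; the paper's actual stochastic maximization and action-selection scheme may differ.

```python
import numpy as np

def stoch_max(q_row, rng, k):
    # Illustrative assumption: approximate max over a uniformly sampled
    # subset of k actions instead of scanning all n actions.
    n = len(q_row)
    subset = rng.choice(n, size=min(k, n), replace=False)
    best = subset[np.argmax(q_row[subset])]
    return best, q_row[best]

def stochastic_q_update(Q, s, a, r, s_next, rng, alpha=0.1, gamma=0.99):
    # One tabular Q-learning step where the bootstrap target uses the
    # stochastic maximization above (subset of size ~log(n_actions)).
    n_actions = Q.shape[1]
    k = max(1, int(np.ceil(np.log(n_actions))))
    _, target_max = stoch_max(Q[s_next], rng, k)
    Q[s, a] += alpha * (r + gamma * target_max - Q[s, a])
    return Q
```

The same subset-based maximization can, in principle, replace the argmax in greedy action selection, which is where the per-step cost savings over standard Q-learning or DQN-style methods would come from.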
