

Poster

Diagnosing Bottlenecks in Deep Q-learning Algorithms

Justin Fu · Aviral Kumar · Matthew Soh · Sergey Levine

Pacific Ballroom #44

Keywords: [ Deep Reinforcement Learning ]


Abstract:

Q-learning methods are a common class of algorithms used in reinforcement learning (RL). However, their behavior with function approximation, especially with neural networks, is poorly understood theoretically and empirically. In this work, we aim to experimentally investigate potential issues in Q-learning by means of a "unit testing" framework, where we can utilize oracles to disentangle sources of error. Specifically, we investigate questions related to function approximation, sampling error, and nonstationarity, and where available, verify whether trends found in oracle settings hold true with deep RL methods. We find that large neural network architectures have many benefits with regard to learning stability; we offer several practical compensations for overfitting; and we develop a novel sampling method based on explicitly compensating for function approximation error that yields fair improvement on high-dimensional continuous control domains.
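To illustrate the oracle-based "unit testing" idea described above, the following sketch contrasts exact Q-iteration (using the true dynamics as an oracle) with Q-iteration driven by sampled backups on a small random MDP, so the gap between the two isolates sampling error. This is an illustrative toy, not the authors' framework; all names (`n_states`, `n_actions`, `sampled_backup`, etc.) are hypothetical.

```python
# Illustrative sketch only -- not the paper's actual framework.
# Compares an oracle (exact) Bellman backup with a sampled backup on a tiny
# random MDP, so the remaining error is attributable to sampling alone.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 10, 3, 0.9

# Random tabular MDP: transition probabilities P[s, a] and rewards R[s, a].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(size=(n_states, n_actions))

def bellman_backup(Q):
    """Exact Bellman optimality backup using the true dynamics (oracle)."""
    return R + gamma * P @ Q.max(axis=1)

def sampled_backup(Q, n_samples=5):
    """Backup estimated from sampled next states, introducing sampling error."""
    target = np.zeros_like(Q)
    for s in range(n_states):
        for a in range(n_actions):
            next_states = rng.choice(n_states, size=n_samples, p=P[s, a])
            target[s, a] = R[s, a] + gamma * Q[next_states].max(axis=1).mean()
    return target

# Oracle Q*: iterate the exact backup to convergence.
Q_star = np.zeros((n_states, n_actions))
for _ in range(500):
    Q_star = bellman_backup(Q_star)

# Sampled Q-iteration: identical procedure, but with estimated targets.
Q_hat = np.zeros((n_states, n_actions))
for _ in range(500):
    Q_hat = sampled_backup(Q_hat)

# Because dynamics, rewards, and representation are otherwise exact,
# the residual gap reflects sampling error only.
print("mean |Q_hat - Q*| due to sampling:", np.abs(Q_hat - Q_star).mean())
```

The same pattern extends to the other error sources studied in the paper: swapping the exact backup for a function-approximated one would isolate function approximation error, and reusing stale targets would isolate nonstationarity.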
