

Poster

Learning from a Learner

Alexis Jacq · Matthieu Geist · Ana Paiva · Olivier Pietquin

Pacific Ballroom #110

Keywords: [ Theory and Algorithms ] [ Multiagent Learning ] [ Deep Reinforcement Learning ]


Abstract:

In this paper, we propose a novel setting for Inverse Reinforcement Learning (IRL), namely "Learning from a Learner" (LfL). As opposed to standard IRL, it does not consist in learning a reward by observing an optimal agent, but rather from observations of another learning (and thus sub-optimal) agent. To do so, we leverage the fact that the observed agent's policy is assumed to improve over time. The ultimate goal of this approach is to recover the actual environment's reward and to allow the observer to outperform the learner. To recover that reward in practice, we propose methods based on the entropy-regularized policy iteration framework. We discuss different approaches to learning solely from trajectories in the state-action space. We demonstrate the generality of our method by observing agents implementing various reinforcement learning algorithms. Finally, we show that, on both discrete and continuous state/action tasks, the observer's performance (when optimizing the recovered reward) can surpass that of the observed agent.
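The sketch below is a minimal, self-contained illustration of the entropy-regularized (soft) policy iteration framework named in the abstract; it is not the paper's LfL algorithm. The toy MDP, the temperature `tau`, the discount `gamma`, and the helper names `soft_evaluate` and `soft_improve` are all assumptions made for illustration. It also shows, under those assumptions, the intuition an observer can exploit: consecutive softmax-improved policies encode the learner's soft Q-function up to a state-dependent constant.

```python
# Minimal sketch of entropy-regularized (soft) policy iteration on a toy MDP.
# Illustrative only: the MDP, temperature, and reward-recovery check are assumptions,
# not the paper's exact LfL procedure.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, tau = 5, 3, 0.9, 0.5

# Random toy dynamics P[s, a, s'] and reward r[s, a] (assumed, for illustration).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
r = rng.normal(size=(n_states, n_actions))

def soft_evaluate(pi, n_iters=500):
    """Entropy-regularized policy evaluation: returns soft Q and V for policy pi."""
    V = np.zeros(n_states)
    for _ in range(n_iters):
        # Soft Bellman backup: Q = r + gamma * E[V'],  V = E_pi[Q - tau * log pi]
        Q = r + gamma * P @ V
        V = (pi * (Q - tau * np.log(pi + 1e-12))).sum(axis=1)
    return Q, V

def soft_improve(Q):
    """Softmax (Boltzmann) policy improvement at temperature tau."""
    logits = Q / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

pi = np.full((n_states, n_actions), 1.0 / n_actions)
policies = [pi]
for _ in range(20):
    Q, _ = soft_evaluate(pi)
    pi = soft_improve(Q)
    policies.append(pi)

# Observation an observer could exploit: tau * log pi_{k+1}(a|s) equals
# Q^{pi_k}(s, a) up to a state-dependent constant, so successive policy
# improvements of the learner carry information about the underlying reward.
Q_prev, _ = soft_evaluate(policies[-2])
recovered = tau * np.log(policies[-1])                      # Q up to a per-state shift
recovered -= recovered.mean(axis=1, keepdims=True)
print(np.allclose(recovered, Q_prev - Q_prev.mean(axis=1, keepdims=True), atol=1e-6))
```

The final check prints `True` because softmax improvement makes the new policy's log-probabilities an affine function of the previous soft Q-values; centering each state's row removes the unknown normalizer, which is the kind of structure the LfL observer relies on when only trajectories of an improving agent are available.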
