

Poster

Learning Robot Skills with Temporal Variational Inference

Tanmay Shankar · Abhinav Gupta

Keywords: [ Applications - Other ] [ Unsupervised Learning ] [ Robotics ] [ Deep Reinforcement Learning ]


Abstract:

In this paper, we address the discovery of robotic options from demonstrations in an unsupervised manner. Specifically, we present a framework to jointly learn low-level control policies and higher-level policies governing how to use them, from demonstrations of a robot performing various tasks. By representing options as continuous latent variables, we frame the problem of learning these options as latent variable inference. We then present a temporally causal variant of variational inference based on a temporal factorization of trajectory likelihoods, which allows us to infer options in an unsupervised manner. We demonstrate the ability of our framework to learn such options across three robotic demonstration datasets.
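To make the approach concrete, below is a minimal sketch of temporally causal variational option inference over demonstration trajectories. It assumes a continuous per-timestep option variable z_t, a GRU-based causal encoder q(z_t | s_{<=t}, a_{<t}) that conditions only on past inputs, a low-level policy pi(a_t | s_t, z_t), and a learned prior p(z_t | s_t, z_{t-1}), with the objective factorized into per-timestep reconstruction and KL terms. All names, architectures, and dimensions here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: temporally causal variational inference of continuous options
# from demonstrations. Architectures and dimensions are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalOptionInference(nn.Module):
    def __init__(self, state_dim=10, action_dim=4, option_dim=8, hidden=64):
        super().__init__()
        # Causal encoder: a forward GRU over (s_t, a_{t-1}), so q(z_t) never sees the future.
        self.encoder_rnn = nn.GRU(state_dim + action_dim, hidden, batch_first=True)
        self.q_head = nn.Linear(hidden, 2 * option_dim)                       # mean, log-var of q(z_t)
        self.prior_head = nn.Linear(state_dim + option_dim, 2 * option_dim)   # p(z_t | s_t, z_{t-1})
        self.policy = nn.Sequential(                                          # pi(a_t | s_t, z_t), Gaussian mean
            nn.Linear(state_dim + option_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, states, actions):
        # states: (B, T, state_dim), actions: (B, T, action_dim) from demonstrations.
        # Shift actions so the encoder at step t only sees a_{t-1} (causality).
        prev_actions = torch.cat([torch.zeros_like(actions[:, :1]), actions[:, :-1]], dim=1)
        h, _ = self.encoder_rnn(torch.cat([states, prev_actions], dim=-1))
        q_mu, q_logvar = self.q_head(h).chunk(2, dim=-1)

        # Reparameterized sample of the continuous option at every timestep.
        z = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()

        # Temporal factorization: per-timestep action reconstruction plus
        # per-timestep KL(q(z_t | s_{<=t}, a_{<t}) || p(z_t | s_t, z_{t-1})).
        z_prev = torch.cat([torch.zeros_like(z[:, :1]), z[:, :-1]], dim=1)
        p_mu, p_logvar = self.prior_head(torch.cat([states, z_prev], dim=-1)).chunk(2, dim=-1)

        recon = -F.mse_loss(self.policy(torch.cat([states, z], dim=-1)), actions)
        kl = 0.5 * (p_logvar - q_logvar
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp() - 1).mean()
        return -(recon - kl)  # negative ELBO to minimize


if __name__ == "__main__":
    model = TemporalOptionInference()
    demo_states = torch.randn(2, 20, 10)   # toy batch of demonstration trajectories
    demo_actions = torch.randn(2, 20, 4)
    loss = model(demo_states, demo_actions)
    loss.backward()
    print(f"negative ELBO: {loss.item():.3f}")
```

In this sketch, the low-level policy decodes actions from the sampled option, while the learned prior plays the role of a higher-level policy proposing the next option given the current state and previous option; conditioning the encoder only on past states and actions is what makes the inference temporally causal.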
