

Poster

Multi-Object Representation Learning with Iterative Variational Inference

Klaus Greff · Raphael Lopez Kaufman · Rishabh Kabra · Nicholas Watters · Christopher Burgess · Daniel Zoran · Loic Matthey · Matthew Botvinick · Alexander Lerchner

Pacific Ballroom #24

Keywords: [ Unsupervised Learning ] [ Representation Learning ] [ Neuroscience and Cognitive Science ] [ Deep Generative Models ]


Abstract:

Human perception is structured around objects which form the basis for our higher-level cognition and impressive systematic generalization abilities. Yet most work on representation learning focuses on feature learning without even considering multiple objects, or treats segmentation as an (often supervised) preprocessing step. Instead, we argue for the importance of learning to segment and represent objects jointly. We demonstrate that, starting from the simple assumption that a scene is composed of multiple entities, it is possible to learn to segment images into interpretable objects with disentangled representations. Our method learns -- without supervision -- to inpaint occluded parts, and extrapolates to scenes with more objects and to unseen objects with novel feature combinations. We also show that, due to the use of iterative variational inference, our system is able to learn multi-modal posteriors for ambiguous inputs and extends naturally to sequences.
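To make the iterative-inference idea concrete, below is a minimal PyTorch sketch of refining per-slot latent posteriors against a spatial-mixture reconstruction. It illustrates the general technique only; the slot count, network sizes, and names (SlotDecoder, refine) are hypothetical simplifications, not the authors' implementation.

```python
# Hypothetical sketch: K object slots, each decoded to an RGB appearance and a
# mask logit map; slots compete per pixel via softmax, and the posterior means
# are refined for a few steps using gradients of the loss (iterative inference).
import torch
import torch.nn as nn
import torch.nn.functional as F

K, Z, H, W = 4, 16, 32, 32  # slots, latent size, image height/width (toy values)


class SlotDecoder(nn.Module):
    """Decodes each latent slot into 3 RGB channels plus 1 mask logit channel."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(Z, 4 * H * W)

    def forward(self, z):                      # z: (B, K, Z)
        B = z.shape[0]
        out = self.fc(z).view(B, K, 4, H, W)
        return out[:, :, :3], out[:, :, 3:]    # rgb, mask logits


decoder = SlotDecoder()
refine = nn.GRUCell(Z, Z)                      # stand-in for a learned refinement network

image = torch.rand(2, 3, H, W)                 # a batch of toy images
mu = torch.zeros(2, K, Z, requires_grad=True)
logvar = torch.zeros(2, K, Z, requires_grad=True)
h = torch.zeros(2 * K, Z)                      # refinement state, one per slot

for step in range(3):                          # a few refinement iterations
    z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterized sample
    rgb, mask_logits = decoder(z)
    masks = F.softmax(mask_logits, dim=1)      # slots compete for each pixel
    recon = (masks * rgb).sum(dim=1)           # spatial mixture reconstruction

    nll = F.mse_loss(recon, image)             # simplified reconstruction term
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    loss = nll + kl

    # Gradients of the loss w.r.t. the posterior parameters drive the update,
    # which is the core of gradient-based iterative variational inference.
    grad_mu, = torch.autograd.grad(loss, mu, retain_graph=True)
    h = refine(grad_mu.reshape(2 * K, Z), h)
    mu = mu + h.view(2, K, Z)                  # refine the posterior means
```

Because the posterior is updated over several steps rather than predicted in a single feed-forward pass, the same machinery extends naturally to ambiguous inputs and to sequences, as the abstract notes.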
