Poster

Infinite Mixture Prototypes for Few-shot Learning

Kelsey Allen · Evan Shelhamer · Hanul Shin · Josh Tenenbaum

Pacific Ballroom #87

Keywords: [ Transfer and Multitask Learning ] [ Semi-supervised learning ] [ Representation Learning ] [ Others ] [ Metric Learning ]


Abstract:

We propose infinite mixture prototypes to adaptively represent both simple and complex data distributions for few-shot learning. Infinite mixture prototypes combine deep representation learning with Bayesian nonparametrics, representing each class by a set of clusters, unlike existing prototypical methods that represent each class by a single cluster. By inferring the number of clusters, infinite mixture prototypes interpolate between nearest neighbor and prototypical representations in a learned feature space, which improves accuracy and robustness in the few-shot regime. We show the importance of adaptive capacity for capturing complex data distributions such as super-classes (like alphabets in character recognition), with 10-25% absolute accuracy improvements over prototypical networks, while still maintaining or improving accuracy on standard few-shot learning benchmarks. By clustering labeled and unlabeled data with the same rule, infinite mixture prototypes achieve state-of-the-art semi-supervised accuracy, and can perform purely unsupervised clustering, unlike existing fully- and semi-supervised prototypical methods.
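As a rough illustration of the idea, the sketch below clusters each class's support embeddings with a DP-means-style rule (a new cluster opens whenever a point falls farther than a threshold from every existing cluster mean) and labels queries by their nearest cluster prototype. All names here (dp_means, lam, classify) and the toy 2-D "embeddings" are hypothetical, not the authors' released implementation; the threshold lam stands in for the inferred cluster count.

```python
# Minimal sketch of infinite-mixture-prototype classification, assuming a
# DP-means-style clustering rule. Names (dp_means, lam, classify) are
# illustrative, not from the paper's code.
import numpy as np

def dp_means(embeddings, lam, n_iters=10):
    """Cluster one class's embeddings into an adaptive number of prototypes.

    A new cluster opens whenever a point lies farther (squared Euclidean)
    than lam from every existing cluster mean. lam thus controls the
    interpolation the abstract describes: large lam -> one prototype per
    class (prototypical networks); tiny lam -> one prototype per example
    (nearest neighbors).
    """
    centers = [embeddings[0].copy()]
    for _ in range(n_iters):
        assignments = []
        for x in embeddings:
            dists = [np.sum((x - c) ** 2) for c in centers]
            j = int(np.argmin(dists))
            if dists[j] > lam:              # too far from every cluster:
                centers.append(x.copy())    # open a new one at this point
                j = len(centers) - 1
            assignments.append(j)
        # Re-estimate each cluster mean; keep the old mean if a cluster empties.
        for k in range(len(centers)):
            members = embeddings[[i for i, a in enumerate(assignments) if a == k]]
            if len(members):
                centers[k] = members.mean(axis=0)
    return np.stack(centers)

def classify(query, class_centers):
    """Label a query embedding by its nearest cluster prototype over all classes."""
    best_label, best_dist = None, np.inf
    for label, centers in class_centers.items():
        d = np.min(np.sum((centers - query) ** 2, axis=1))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label

# Toy episode with precomputed 2-D "embeddings": class 0 is unimodal, class 1
# is bimodal, so class 1 should receive two prototypes rather than one.
rng = np.random.default_rng(0)
support = {
    0: rng.normal(0.0, 0.1, size=(5, 2)),
    1: np.vstack([rng.normal([2.0, 2.0], 0.1, size=(3, 2)),
                  rng.normal([-2.0, -2.0], 0.1, size=(2, 2))]),
}
class_centers = {lbl: dp_means(emb, lam=1.0) for lbl, emb in support.items()}
print(classify(np.array([-1.9, -2.1]), class_centers))  # -> 1
```

In the semi-supervised setting the abstract mentions, the same clustering rule would also assign unlabeled embeddings to clusters; that extension, and the learned deep encoder producing the embeddings, are omitted from this sketch.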
