

Poster

Efficient Mixture Learning in Black-Box Variational Inference

Alexandra Hotti · Oskar Kviman · Ricky Molén · Víctor Elvira · Jens Lagergren


Abstract:

Mixture variational distributions in black-box variational inference (BBVI) have demonstrated impressive results in challenging density estimation tasks. However, scaling the number of mixture components can lead to a linear increase in the number of learnable parameters and a quadratic increase in inference time due to the evaluation of the evidence lower bound (ELBO).

Our two key contributions address these limitations. First, we introduce the novel Multiple Importance Sampling Variational Autoencoder (MISVAE), which amortizes the mapping from input to mixture-parameter space using one-hot encodings. With MISVAE, each additional mixture component incurs a negligible increase in network parameters. Second, we construct two new estimators of the ELBO for mixtures in BBVI, enabling a substantial reduction in inference time with marginal, or even positive, impact on performance. Collectively, our contributions enable scalability to hundreds of mixture components and superior estimation performance in shorter time and with fewer network parameters compared to previous Mixture VAEs. Experimenting with MISVAE, we achieve state-of-the-art (SOTA) results on MNIST and compare with popular models on CIFAR-10. Furthermore, we empirically validate our estimators in other BBVI settings, including Bayesian phylogenetic inference, where we improve inference times for the SOTA mixture model on eight real phylogenetic data sets.
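To make the two bottlenecks the abstract mentions concrete, here is a minimal sketch (not the authors' implementation; all layer sizes, class names, and the `log_p_joint` callable are illustrative assumptions) of (1) amortizing per-component mixture parameters with a shared encoder that receives the component index as a one-hot vector, so extra components add essentially no network parameters, and (2) the standard mixture ELBO, whose mixture log-density term requires all K × K pairwise component evaluations, which is the quadratic cost the paper's new estimators are designed to reduce.

```python
# Illustrative sketch only; not the MISVAE code or the paper's ELBO estimators.
import math
import torch
import torch.nn as nn


class OneHotMixtureEncoder(nn.Module):
    """Shared encoder q(z | x, k): the component index k enters only as a
    one-hot vector concatenated to x, so increasing the number of components
    K barely increases the number of parameters."""

    def __init__(self, x_dim: int, z_dim: int, num_components: int, hidden: int = 256):
        super().__init__()
        self.K = num_components
        self.net = nn.Sequential(nn.Linear(x_dim + num_components, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.log_var = nn.Linear(hidden, z_dim)

    def forward(self, x: torch.Tensor):
        # Returns per-component Gaussian parameters of shape (batch, K, z_dim).
        batch = x.shape[0]
        eye = torch.eye(self.K, device=x.device)                   # one-hot codes, (K, K)
        x_rep = x.unsqueeze(1).expand(batch, self.K, x.shape[1])   # (batch, K, x_dim)
        onehot = eye.unsqueeze(0).expand(batch, self.K, self.K)    # (batch, K, K)
        h = self.net(torch.cat([x_rep, onehot], dim=-1))
        return self.mu(h), self.log_var(h)


def naive_mixture_elbo(log_p_joint, mus, log_vars):
    """Standard (quadratic-cost) mixture ELBO with uniform weights: for one
    sample z_k per component, log q(z_k) = logsumexp_j log N(z_k; mu_j, var_j)
    - log K, so all K samples together need O(K^2) density evaluations."""
    batch, K, z_dim = mus.shape
    std = torch.exp(0.5 * log_vars)
    z = mus + std * torch.randn_like(std)                          # one sample per component
    # Pairwise log-densities log N(z_k; mu_j, var_j), shape (batch, K, K).
    diff = z.unsqueeze(2) - mus.unsqueeze(1)                       # (batch, K, K, z_dim)
    log_dens = -0.5 * ((diff ** 2) / log_vars.exp().unsqueeze(1)
                       + log_vars.unsqueeze(1)
                       + math.log(2 * math.pi)).sum(-1)
    log_q = torch.logsumexp(log_dens, dim=2) - math.log(K)
    # log_p_joint(z) is assumed to return log p(x, z_k) with shape (batch, K).
    return (log_p_joint(z) - log_q).mean(dim=1)
```

In this baseline the `diff` tensor makes the K² cost explicit; the estimators proposed in the paper avoid paying that full cost at every ELBO evaluation, which is what enables scaling to hundreds of components.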
