Poster

Amortized Variational Deep Kernel Learning

Alan Matias · César Lincoln Mattos · Joao Paulo Gomes · Diego Mesquita


Abstract:

Deep kernel learning (DKL) marries the uncertainty quantification of Gaussian processes (GPs) with the representational power of deep neural networks. However, training DKL is challenging and often leads to overfitting. Most notably, DKL often learns "non-local" kernels, incurring spurious correlations. To remedy this pathology, we propose using amortized inducing points and a parameter-sharing scheme that ties together the amortization and DKL networks. This design imposes an explicit dependency between the ELBO's model-fit and capacity terms, which prevents the former from dominating the optimization procedure and incurring the aforementioned spurious correlations. Extensive experiments show that our resulting method, amortized variational DKL (AVDKL), i) consistently outperforms DKL and standard GPs on tabular data; ii) achieves significantly higher accuracy than DKL in node classification tasks; and iii) leads to substantially better accuracy and negative log-likelihood than DKL on CIFAR100.
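
To make the parameter-sharing idea concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' implementation; all class and function names are invented for illustration). It shows only the structural point of the abstract: a single feature extractor whose weights are used both by the deep kernel and by the network that amortizes the inducing points, so the ELBO's model-fit and capacity terms depend on the same parameters. The variational GP machinery and the ELBO itself are omitted.

```python
import torch
import torch.nn as nn

class SharedFeatureExtractor(nn.Module):
    """Feature network shared by the deep kernel and the amortization head."""
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

class AmortizedInducingPoints(nn.Module):
    """Maps each input's features to a set of input-dependent inducing points."""
    def __init__(self, feat_dim: int, num_inducing: int):
        super().__init__()
        self.num_inducing = num_inducing
        self.head = nn.Linear(feat_dim, num_inducing * feat_dim)

    def forward(self, h):
        # h: (batch, feat_dim) -> (batch, num_inducing, feat_dim)
        return self.head(h).view(-1, self.num_inducing, h.shape[-1])

class AVDKLSketch(nn.Module):
    """Hypothetical skeleton: deep kernel + amortized inducing points
    with a shared feature extractor, as described in the abstract."""
    def __init__(self, in_dim: int, feat_dim: int = 16, num_inducing: int = 8):
        super().__init__()
        # The same extractor parameterizes both pathways (parameter sharing).
        self.extractor = SharedFeatureExtractor(in_dim, feat_dim)
        self.inducing = AmortizedInducingPoints(feat_dim, num_inducing)
        self.log_lengthscale = nn.Parameter(torch.zeros(()))

    def rbf(self, a, b):
        # RBF kernel evaluated in the learned feature space (the "deep kernel").
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-0.5 * d2 / torch.exp(self.log_lengthscale) ** 2)

    def forward(self, x):
        h = self.extractor(x)               # features for the deep kernel
        z = self.inducing(h)                # amortized inducing locations
        k_xz = self.rbf(h.unsqueeze(1), z)  # per-input cross-covariances
        return h, z, k_xz

# Usage: both the kernel features h and the inducing points z are functions
# of self.extractor's weights, so gradients from both ELBO terms flow
# through the same shared parameters.
model = AVDKLSketch(in_dim=5)
h, z, k_xz = model(torch.randn(4, 5))
```
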
