

Poster

DiffUCO: A Diffusion Model Framework for Unsupervised Neural Combinatorial Optimization

Sebastian Sanokowski · Sepp Hochreiter · Sebastian Lehner


Abstract:

Approximating intractable distributions with neural networks in the absence of training data has recently attracted interest in Combinatorial Optimization and many other scientific applications. Current data-free approximation methods rely on exact likelihood models, which allow the marginal probability of each sample to be evaluated but strongly restrict the set of usable models; prominent examples are Autoregressive Models and Normalizing Flows. Approximate likelihood models such as Variational Autoencoders and Diffusion Models are very popular in data-based approximation, yet in the data-free setting the application of these Latent Variable Models remains an open problem because the exact evaluation of their sample probabilities is not possible. We introduce a new method that enables these approximate likelihood models to be used for data-free optimization even though their sample probabilities cannot be computed. Our method relies on an upper bound of the reverse Kullback-Leibler divergence, which enables the application of highly expressive Diffusion Models to data-free optimization problems. We experimentally validate our approach on data-free Combinatorial Optimization and show that our method outperforms recently published autoregressive methods on many Combinatorial Optimization datasets, achieving a new state of the art.
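The kind of upper bound the abstract refers to can be sketched as follows; this is an illustrative reconstruction based only on the standard chain rule of the KL divergence, not necessarily the paper's exact derivation. Write the latent variable model's intractable marginal as $q_\theta(x_0) = \int q_\theta(x_{0:T})\, dx_{1:T}$ (with $x_{1:T}$ the diffusion trajectory) and let $p(x_0)$ be the target distribution over solutions, e.g. a Boltzmann distribution defined by the optimization problem's energy:

```latex
% Chain rule of the KL divergence: the joint divergence decomposes into
% the marginal term plus a non-negative expected conditional term.
\begin{align}
D_{\mathrm{KL}}\big(q_\theta(x_{0:T}) \,\|\, p(x_{0:T})\big)
  &= D_{\mathrm{KL}}\big(q_\theta(x_0) \,\|\, p(x_0)\big)
   + \mathbb{E}_{q_\theta(x_0)}\!\left[
       D_{\mathrm{KL}}\big(q_\theta(x_{1:T} \mid x_0) \,\|\, p(x_{1:T} \mid x_0)\big)
     \right] \\
  &\ge D_{\mathrm{KL}}\big(q_\theta(x_0) \,\|\, p(x_0)\big).
\end{align}
```

Because the second term is a KL divergence and hence non-negative, minimizing the tractable divergence over joint trajectories minimizes an upper bound on the intractable reverse KL between the model's marginal and the target, which is what permits training a Diffusion Model without data or exact sample likelihoods.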
