
Poster

Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance

Xinyu Peng · Ziyang Zheng · Wenrui Dai · Nuoqian Xiao · Chenglin Li · Junni Zou · Hongkai Xiong


Abstract:

Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for each specific problem. In this paper, we propose the first unified interpretation of zero-shot methods from the perspective of approximating the conditional posterior mean for the reverse diffusion process of conditional sampling. We reveal that recent methods are equivalent to making isotropic Gaussian approximations to intractable posterior distributions over clean images given diffused noisy images, differing only in the handcrafted design of the isotropic posterior covariances. Inspired by this finding, we propose to improve recent methods by optimizing the posterior covariance via maximum likelihood estimation. To achieve optimal posterior covariance without retraining, we provide general solutions based on two approaches specifically designed for training-free posterior covariance optimization: using pre-trained models with and without reverse covariances. Experimental results demonstrate that the proposed methods significantly enhance the overall performance of recent methods, or their robustness to hyperparameters.
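To make the abstract's central idea concrete, the following is a minimal sketch (not the authors' implementation) of the isotropic-Gaussian posterior view it describes: for a linear inverse problem y = A x_0 + n with n ~ N(0, σ_y² I), approximating p(x_0 | x_t) by N(x̂_0, r_t² I) makes the likelihood p(y | x_t) a tractable Gaussian, where x̂_0 is the Tweedie-style posterior mean from the model's noise prediction. The function names, and the choice of the scalar r_t as the handcrafted isotropic covariance, are illustrative assumptions.

```python
import numpy as np

def posterior_mean(x_t, eps_hat, alpha_bar_t):
    """Posterior mean E[x_0 | x_t] from a noise prediction (Tweedie-style).

    Assumes the standard DDPM parameterization
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    """
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_bar_t)

def gaussian_likelihood_approx(y, A, x0_hat, sigma_y, r_t):
    """Approximate log p(y | x_t) for y = A x_0 + n, n ~ N(0, sigma_y^2 I).

    Under the isotropic Gaussian approximation p(x_0 | x_t) ~ N(x0_hat, r_t^2 I),
    marginalizing x_0 gives y | x_t ~ N(A x0_hat, sigma_y^2 I + r_t^2 A A^T).
    Different choices of r_t recover different zero-shot guidance schemes.
    """
    m = A.shape[0]
    cov = sigma_y**2 * np.eye(m) + r_t**2 * (A @ A.T)
    resid = y - A @ x0_hat
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (resid @ np.linalg.solve(cov, resid)
                   + logdet + m * np.log(2.0 * np.pi))
```

In guidance-based samplers, the gradient of this log-likelihood with respect to x_t would be added to the unconditional score at each reverse step; the paper's contribution is choosing the posterior covariance (here the scalar r_t²) optimally rather than by hand.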
