Diffusion modelling for amortised inference

RCLW03 - Accelerating statistical inference and experimental design with machine learning

This talk will survey recent work, by me and others, on the use of diffusion models as amortised variational posteriors. While diffusion models are classically trained to maximise a variational bound on dataset likelihood, their expressive power can also be harnessed to approximate posterior distributions over latent variables where no unbiased samples are available – that is, amortised Bayesian inference – and to approximately solve the related problem of sampling posteriors under diffusion model priors. The ensuing learning problem has close connections to stochastic optimal control and can be solved using a variety of learning-based and Monte Carlo approaches. After introducing these algorithms and connections, I will present recent results on the use of techniques from deep reinforcement learning in diffusion sampling and on connections with (twisted) sequential Monte Carlo. Applications include high-dimensional inverse problems in astrophysics and biology, constrained sampling in large generative models, inference of stochastic dynamical systems, and black-box Bayesian optimisation.
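
The following is a minimal sketch, not taken from the talk, of the amortised-inference idea described above: a diffusion denoiser conditioned on observations y is trained on joint simulator draws (x, y), so that running the reverse diffusion for a new y approximately samples the posterior p(x | y). The toy Gaussian simulator, network architecture, and DDPM-style noise schedule are illustrative assumptions, not the speaker's method.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy joint simulator (illustrative assumption): latent x ~ N(0, 1), observation y = x + 0.5 * noise.
def simulate(n):
    x = torch.randn(n, 1)
    y = x + 0.5 * torch.randn(n, 1)
    return x, y

# Conditional denoiser eps_theta(x_t, t, y): predicts the noise that was added to x_0.
net = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 64), nn.SiLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Standard DDPM-style noise schedule.
T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Training: the usual denoising objective, with the observation y as an extra conditioning input.
for step in range(2000):
    x0, y = simulate(256)
    t = torch.randint(0, T, (256, 1))
    ab = alpha_bars[t]
    eps = torch.randn_like(x0)
    xt = ab.sqrt() * x0 + (1.0 - ab).sqrt() * eps
    pred = net(torch.cat([xt, t.float() / T, y], dim=1))
    loss = ((pred - eps) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Amortised inference: for a new observation y*, ancestral sampling through the reverse chain
# draws approximate samples from p(x | y*) with no further training or simulation.
@torch.no_grad()
def sample_posterior(y_star, n=1000):
    y = y_star.expand(n, 1)
    x = torch.randn(n, 1)
    for t in reversed(range(T)):
        tt = torch.full((n, 1), t / T)
        eps_hat = net(torch.cat([x, tt, y], dim=1))
        a, ab = alphas[t], alpha_bars[t]
        x = (x - (1.0 - a) / (1.0 - ab).sqrt() * eps_hat) / a.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)
    return x

samples = sample_posterior(torch.tensor([[1.0]]))
print(samples.mean().item())  # analytic posterior mean for this conjugate toy model is 0.8 * y = 0.8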

This talk is part of the Isaac Newton Institute Seminar Series.
