Doctoral Thesis: Guiding Deep Probabilistic Models

Thursday, June 13
11:00 am - 12:30 pm

32-G882 (Hewlett)

By: Timur Garipov

Supervisor: Tommi Jaakkola


Additional Location Details:

The defense will be held in a hybrid format. If you are interested in joining over Zoom, please contact Timur Garipov for the Zoom meeting details.

Deep probabilistic models use deep neural networks to learn probability distributions over high-dimensional data spaces. Learning and inference in these models are complicated by the difficulty of directly evaluating the discrepancy between the model distribution and the target. This thesis addresses this challenge and develops novel learning and inference algorithms that guide complex parameterized distributions toward desired configurations using signals from auxiliary discriminative models.
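To make the idea of guidance concrete, here is a minimal toy sketch (not the thesis's actual method): samples from a simple base distribution are nudged by gradient ascent on the log-output of an auxiliary discriminative model. The discriminator `sigmoid(w * x)`, the weight `w`, and the step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Samples from a simple base model: a standard Gaussian in 1D.
samples = rng.normal(loc=0.0, scale=1.0, size=1000)

# A hypothetical discriminator p(desired | x) = sigmoid(w * x):
# larger x is judged more "desired". The weight w is an assumption.
w = 2.0

def guidance_grad(x):
    # d/dx log sigmoid(w * x) = w * (1 - sigmoid(w * x))
    return w * (1.0 - sigmoid(w * x))

# Guide the samples by gradient ascent on the discriminator's log-probability.
guided = samples.copy()
for _ in range(50):
    guided += 0.1 * guidance_grad(guided)

print(samples.mean(), guided.mean())  # the guided mean shifts toward larger x
```

The guidance signal moves probability mass toward configurations the auxiliary model scores highly, without retraining the base model, which is the general pattern the abstract alludes to.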

In the first part of the thesis, we develop novel stable training objectives for Generative Adversarial Networks (GANs). We show that under standard unary-discriminator objectives, most of the valid solutions, where the learned distribution is aligned with the target, are unstable. We propose training objectives based on pairwise discriminators that provably preserve distribution alignment and demonstrate improved training stability in image generation tasks.

In the second part of the thesis, we introduce distribution support alignment as an alternative to the distribution alignment objective and develop a learning algorithm that guides distributions towards support alignment. Recent works have shown that under cross-domain label distribution shift, optimizing for distribution alignment is overly restrictive and degrades performance; our support-alignment algorithm alleviates this issue. We demonstrate the effectiveness of our approach in unsupervised domain adaptation under label distribution shift.

In the third part of the thesis, we develop a novel approach to compositional generation in iterative generative processes, specifically diffusion models and Generative Flow Networks (GFlowNets). Motivated by the growing prominence of generative models pre-trained at scale and their high training costs, we propose composition operations and guidance-based sampling algorithms that combine multiple pre-trained iterative generative processes. We present empirical results on image and molecular generation tasks.
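As a hedged illustration of combining pre-trained iterative generative processes (a simplified stand-in, not the thesis's composition operations): two "pre-trained" score-based models, each a 1D Gaussian with a closed-form score, are composed by summing their scores, which corresponds to sampling from the product of their densities. Unadjusted Langevin dynamics then samples the composed model; the specific Gaussians, step size, and chain length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_a(x):  # score (gradient of log-density) of N(-1, 1)
    return -(x + 1.0)

def score_b(x):  # score of N(+1, 1)
    return -(x - 1.0)

def composed_score(x):
    # Summing scores multiplies the densities; here the product is N(0, 1/2).
    return score_a(x) + score_b(x)

# Unadjusted Langevin dynamics: x <- x + eps * score(x) + sqrt(2 * eps) * noise
eps = 0.01
x = 0.0
trace = []
for t in range(20000):
    x = x + eps * composed_score(x) + np.sqrt(2.0 * eps) * rng.normal()
    if t >= 10000:  # discard burn-in
        trace.append(x)

trace = np.array(trace)
print(trace.mean(), trace.std())  # approx. 0 and sqrt(1/2)
```

Neither component model is retrained; the composition happens entirely at sampling time through the combined score, which is the appeal of guidance-based composition when pre-training is expensive.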

Committee Members:
Tommi Jaakkola (advisor, MIT)
Phillip Isola (MIT)
Samuel Kaski (Aalto University, University of Manchester)