25 – Undirected Generative Models

In this last lecture, we will discuss undirected generative models. Specifically, we will look at the Restricted Boltzmann Machine and (to the extent that time permits) the Deep Boltzmann Machine.

Slides:

Reference: (* = you are responsible for this material)

  • *Sections 20.1 to 20.4.4 (inclusive) of the Deep Learning textbook.
  • Sections 17.3-17.4 (MCMC, Gibbs), chap. 19 (Approximate Inference) of the Deep Learning textbook.
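As a companion to the MCMC readings above, here is a minimal numpy sketch (not taken from the lecture materials) of one block Gibbs sampling step in a binary RBM. All variable names and sizes are illustrative; `W`, `b`, and `c` are the weight matrix, visible biases, and hidden biases.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v, W, b, c, rng):
    # The RBM's bipartite graph makes all hidden units conditionally
    # independent given v (and all visible units independent given h),
    # so each half of the step is a single vectorised Bernoulli sample.
    p_h = sigmoid(v @ W + c)                        # P(h_j = 1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ W.T + b)                      # P(v_i = 1 | h)
    v = (rng.random(p_v.shape) < p_v).astype(float)
    return v, h

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4                          # toy sizes
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b = np.zeros(n_visible)
c = np.zeros(n_hidden)

v = rng.integers(0, 2, size=n_visible).astype(float)
for _ in range(10):                                 # a short Gibbs chain
    v, h = gibbs_step(v, W, b, c, rng)
```

Running the chain longer yields approximate samples from the model's joint distribution; contrastive divergence training truncates this chain after only a few steps.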

24 – Attention and Memory

In this lecture, Dzmitry (Dima) Bahdanau will discuss attention and memory in neural networks.
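As a rough illustration of the attention mechanism Bahdanau introduced, here is a minimal numpy sketch of additive (Bahdanau-style) attention. The parameter names (`Wq`, `Wk`, `w`) and dimensions are assumptions for the example, not the lecture's notation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def additive_attention(query, keys, Wq, Wk, w):
    # Additive scoring: e_t = w^T tanh(Wq q + Wk k_t) for each step t.
    scores = np.tanh(query @ Wq + keys @ Wk) @ w
    alpha = softmax(scores)        # attention weights, sum to 1
    context = alpha @ keys         # expected annotation under alpha
    return context, alpha

rng = np.random.default_rng(0)
T, d_k, d_a = 5, 8, 16             # time steps, key dim, attention dim
query = rng.normal(size=d_k)       # e.g. a decoder hidden state
keys = rng.normal(size=(T, d_k))   # e.g. encoder annotations
Wq = rng.normal(size=(d_k, d_a))
Wk = rng.normal(size=(d_k, d_a))
w = rng.normal(size=d_a)
context, alpha = additive_attention(query, keys, Wq, Wk, w)
```

The context vector is a convex combination of the annotations, so the decoder can attend to different source positions at each step.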

Slides:

Reference: (* = you are responsible for this material)

23 – Autoregressive Generative Models

In these lectures, we discuss autoregressive generative models such as NADE, MADE, PixelCNN, PixelRNN, and PixelVAE.
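All of these models share the autoregressive factorization log p(x) = Σ_d log p(x_d | x_<d). Here is a minimal numpy sketch of that idea, using a per-dimension logistic regression as a toy stand-in for NADE's shared hidden layer; `W` and `b` are illustrative parameters, not part of any of the named models.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def autoregressive_log_prob(x, W, b):
    # log p(x) = sum_d log p(x_d | x_<d); each conditional here is a
    # logistic regression on the preceding dimensions.
    log_p = 0.0
    for d in range(len(x)):
        p = sigmoid(x[:d] @ W[d, :d] + b[d])   # P(x_d = 1 | x_<d)
        log_p += x[d] * np.log(p) + (1.0 - x[d]) * np.log(1.0 - p)
    return log_p

rng = np.random.default_rng(0)
D = 3
W = rng.normal(size=(D, D))   # only the strictly lower triangle is used
b = rng.normal(size=D)
x = np.array([1.0, 0.0, 1.0])
log_px = autoregressive_log_prob(x, W, b)
```

Because every conditional is a proper Bernoulli distribution, the probabilities of all 2^D binary vectors sum to exactly one; no partition function is needed, which is the key computational advantage of this model family.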

Slides:

Reference: (* = you are responsible for this material)

21/22 – GANs

In these lectures, at long last, we will discuss Generative Adversarial Networks (GANs). GANs are a recent and very popular generative model paradigm. We will discuss the GAN formalism, some of the theory, and practical considerations.
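To make the formalism concrete, here is a small numpy sketch (an illustration, not the lecture's code) of the two objectives in the GAN minimax game, including the non-saturating generator loss commonly used in practice.

```python
import numpy as np

def gan_losses(d_real, d_fake):
    # d_real = D(x) on data, d_fake = D(G(z)) on samples, both in (0, 1).
    # The discriminator ascends E[log D(x)] + E[log(1 - D(G(z)))];
    # both sides are written here as losses to minimise.
    d_loss = -(np.log(d_real).mean() + np.log(1.0 - d_fake).mean())
    # Non-saturating generator loss: maximise log D(G(z)) rather than
    # minimise log(1 - D(G(z))), giving stronger gradients early on.
    g_loss = -np.log(d_fake).mean()
    return d_loss, g_loss

# At the equilibrium D(x) = D(G(z)) = 1/2, the discriminator loss
# equals 2 * log 2 and the generator loss equals log 2.
d_loss, g_loss = gan_losses(np.full(8, 0.5), np.full(8, 0.5))
```

In training, the two losses are minimised in alternation, each with respect to its own network's parameters.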

Slides:

Reference: (* = you are responsible for this material)

18/20 – Variational Autoencoders

In this lecture we will finish up our discussion of sparse coding and start our discussion of variational autoencoders (VAEs). VAEs are the first of the generative models that we will study. We will see how they modify the standard autoencoder reconstruction loss to create a well-defined generative model with clear probabilistic semantics.
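The probabilistic machinery behind that modified loss can be sketched in a few lines of numpy. This is an illustrative sketch, not the lecture's code: it shows the reparameterization trick and the closed-form KL term that the VAE adds to the reconstruction loss, assuming a diagonal Gaussian encoder and a standard normal prior.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): the randomness is moved
    # into eps, so gradients flow through mu and log_var to the encoder.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q;
    # this regulariser plus the reconstruction term gives the ELBO.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)   # toy encoder outputs
z = reparameterize(mu, log_var, rng)
# The KL term is exactly zero when q already equals the prior N(0, I).
kl = kl_to_standard_normal(mu, log_var)
```

Minimising the negative ELBO therefore trades reconstruction quality against keeping the approximate posterior close to the prior.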

Slides:

Reference: (* = you are responsible for this material)

  • *Sections 20.10.1-20.10.3 of the Deep Learning textbook.
  • Diederik P Kingma, Max Welling, Auto-Encoding Variational Bayes published in the International Conference on Learning Representations (ICLR) 2014.
  • Other references are provided in the slides.