25 – Undirected Generative Models

In this last lecture, we will discuss undirected generative models. Specifically we will look at the Restricted Boltzmann Machine and (to the extent that time permits) the Deep Boltzmann Machine.


References (* = you are responsible for this material):

  • *Sections 20.1 to 20.4.4 (inclusive) of the Deep Learning textbook.
  • Sections 17.3-17.4 (MCMC, Gibbs sampling) and chapter 19 (Approximate Inference) of the Deep Learning textbook.
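Section 20.2 of the textbook covers training RBMs with contrastive divergence. As a rough illustration of what that looks like in practice, here is a minimal numpy sketch of a CD-1 training loop for a binary RBM; the toy data, layer sizes, and learning rate are made up for illustration, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: 64 examples over 8 visible units (illustrative only).
data = rng.integers(0, 2, size=(64, 8)).astype(float)

n_vis, n_hid = 8, 4
W = rng.normal(scale=0.01, size=(n_vis, n_hid))  # weights
b = np.zeros(n_vis)   # visible biases
c = np.zeros(n_hid)   # hidden biases
lr = 0.1

for epoch in range(50):
    # Positive phase: hidden probabilities given the data.
    ph = sigmoid(data @ W + c)
    # One Gibbs step (CD-1): sample h, reconstruct v, recompute h.
    h = (rng.random(ph.shape) < ph).astype(float)
    pv = sigmoid(h @ W.T + b)
    v = (rng.random(pv.shape) < pv).astype(float)
    ph2 = sigmoid(v @ W + c)
    # CD-1 approximation to the log-likelihood gradient:
    # positive statistics from the data, negative from the reconstruction.
    W += lr * (data.T @ ph - v.T @ ph2) / len(data)
    b += lr * (data - v).mean(axis=0)
    c += lr * (ph - ph2).mean(axis=0)
```

CD-1 truncates the Gibbs chain after a single step; running the chain longer (CD-k) or persisting it across updates (PCD) gives a better approximation of the negative phase, at extra cost.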

9 thoughts on “25 – Undirected Generative Models”

  1. As we saw in class, it’s possible to use an autoencoder to pre-train the layers of a neural net and then fine-tune the parameters on another task (classification, for example). Is it possible to do the same thing with an RBM, i.e. pre-train an RBM and then adapt the model for another task, like classification?


  2. There is a paper, “The Potential Energy of an Autoencoder”, that shows two interesting things:
    1) Most common autoencoders are naturally associated with an energy function.
    2) For autoencoders with sigmoid hidden units, the energy function is identical to the free energy of an RBM.
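For reference, the free energy mentioned in the second comment: for an RBM with binary hidden units, summing the hidden units out of the joint distribution gives a closed form,

```latex
% Free energy of an RBM with binary hidden units, obtained by
% marginalizing h out of the energy E(v,h) = -b^T v - c^T h - v^T W h:
F(\mathbf{v}) = -\mathbf{b}^\top \mathbf{v}
  - \sum_j \log\!\bigl(1 + \exp\bigl(c_j + \mathbf{v}^\top W_{:,j}\bigr)\bigr),
\qquad
p(\mathbf{v}) = \frac{e^{-F(\mathbf{v})}}{Z}.
```

It is this expression that the paper shows coinciding with the energy function induced by a sigmoid autoencoder.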
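On the first comment: the answer is yes, and the key observation is that the RBM's inference direction p(h|v) = sigmoid(vW + c) is exactly a sigmoid feedforward layer, so the trained weights transfer directly into a network that is then fine-tuned with backprop. A sketch of the idea, with made-up layer sizes (W and c would come from a trained RBM in practice):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-ins for RBM parameters learned by contrastive divergence:
# W (n_vis x n_hid) and hidden biases c. Sizes here are illustrative.
W = np.random.default_rng(1).normal(scale=0.01, size=(784, 128))
c = np.zeros(128)

def features(v):
    """Hidden-unit probabilities p(h|v) of the RBM, reused as the
    activations of a sigmoid layer in a feedforward classifier."""
    return sigmoid(v @ W + c)

# A classifier head (e.g. a softmax layer) is then stacked on top of
# features(v) and the whole network fine-tuned with backprop, exactly as
# with autoencoder pre-training.
```

Stacking several RBMs this way, each trained on the hidden activations of the previous one, is the greedy layer-wise pre-training used for deep belief networks.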

