In this last lecture, we will discuss undirected generative models. Specifically, we will look at the Restricted Boltzmann Machine and (to the extent that time permits) the Deep Boltzmann Machine.
Slides:
Reference: (* = you are responsible for this material)
- *Sections 20.1 to 20.4.4 (inclusive) of the Deep Learning textbook.
- Sections 17.3-17.4 (MCMC, Gibbs sampling) and chapter 19 (Approximate Inference) of the Deep Learning textbook.
Can we have the slides on attention models, please?
I had published the page, but I had not categorized it as a lecture, so it didn’t show up in the right place. Sorry about that.
— Aaron
It seems that slides 9, 10, and 14 of the Restricted Boltzmann Machines deck are not readable.
Hi, is it possible to upload the new corrected slides? Thanks
Sorry, I might have forgotten to “update” the page. They should be the corrected slides now.
As we saw in class, it’s possible to use an autoencoder to pre-train the layers of a neural net and then fine-tune the parameters on another task (classification, for example). Is it possible to do the same thing with an RBM, i.e., pre-train an RBM and then convert the model for another task, like classification?
It’s possible. For example, see the paper “Classification using Discriminative Restricted Boltzmann Machines”.
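To make the workflow concrete, here is a minimal NumPy sketch of the generative pre-train-then-reuse recipe (my own illustration, not the discriminative method from that paper): train the RBM with one step of contrastive divergence (CD-1), then reuse the learned weights and hidden biases as the first layer of a classifier. All sizes and names are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy sizes; any binary data matrix of shape (n_samples, n_vis) works.
n_vis, n_hid, lr = 784, 256, 0.01
W  = 0.01 * rng.standard_normal((n_vis, n_hid))
bv = np.zeros(n_vis)   # visible biases
bh = np.zeros(n_hid)   # hidden biases

def cd1_step(v0, W, bv, bh):
    """One CD-1 update on a minibatch of binary visibles v0."""
    p_h0 = sigmoid(v0 @ W + bh)                       # positive phase
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    p_v1 = sigmoid(h0 @ W.T + bv)                     # one Gibbs step back
    p_h1 = sigmoid(p_v1 @ W + bh)
    n = v0.shape[0]
    # Approximate log-likelihood gradient: data stats minus model stats.
    W  = W  + lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
    bv = bv + lr * (v0 - p_v1).mean(axis=0)
    bh = bh + lr * (p_h0 - p_h1).mean(axis=0)
    return W, bv, bh

def pretrained_features(v, W, bh):
    """After pre-training, use the deterministic hidden activations
    as the first layer of a classifier."""
    return sigmoid(v @ W + bh)
```

Fine-tuning then proceeds just as with autoencoder pre-training: stack a softmax layer on pretrained_features and train the whole network with backprop on the labels.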
It seems you can. We saw parts of a nice paper on this subject in the regularization lectures ( http://www.jmlr.org/papers/volume11/erhan10a/erhan10a.pdf ). I have not read it in detail yet, but they conclude that, in the experiments performed, both approaches have a similar regularizing effect.
There is a paper, “The Potential Energy of an Autoencoder”, that shows two interesting things:
1) Most common autoencoders are naturally associated with an energy function
2) For autoencoders with sigmoid hidden units, the energy function is identical to the free energy of an RBM
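For reference, the free energy in question for a binary-binary RBM is a standard identity (stated here in my own notation, not necessarily the paper’s: b for visible biases, c for hidden biases, W for weights). Marginalizing out the hidden units gives a sum of softplus terms, one per hidden unit, which is where the connection to sigmoid hidden units comes from:

```latex
% Energy: E(v,h) = -b^T v - c^T h - v^T W h, with h_j in {0,1}.
% Summing over each binary h_j yields a factor 1 + exp(c_j + v^T W_{:,j}).
F(v) = -\log \sum_{h} e^{-E(v,h)}
     = -b^\top v - \sum_{j} \log\!\bigl(1 + e^{\,c_j + v^\top W_{:,j}}\bigr)
```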