Neural networks [5.1] : Restricted Boltzmann machine - definition

Comments
Author

I used this lecture to understand the lectures given by Geoffrey Hinton in his neural networks course on Coursera. Thanks a ton, you saved me again.

gautamkarmakar
Author

Thanks for the explanation of the energy function. Everything suddenly starts to make sense.

TheReficul
Author

Great lecture, Hugo; thanks for putting all this hard work into it. Very well taught, too!

peterd
Author

I really enjoyed watching this video. As I'm working on a project about DBNs, it is very useful to me. Thanks.

yifanli
Author

@Jim O' Donoghue The key identity is exp(a + b) = exp(a)·exp(b): the exponential of a sum equals the product of the exponentials. Applying this to the sum over hidden units in the numerator's exponent turns the numerator into the product shown at 10:26.

RudramurthyV
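
A short worked version of the step being discussed, for anyone stuck on the same point. This assumes the binary-unit notation suggested by the slides ($h_j$ the hidden units, $x$ the visible vector, $W_j$ the $j$-th row of $W$, $b_j$ the hidden biases); the exact letters are an assumption here:

$$
p(h \mid x)
= \frac{\exp\big(\sum_j b_j h_j + \sum_j h_j W_j x\big)}{\sum_{h'} \exp\big(\sum_j b_j h'_j + \sum_j h'_j W_j x\big)}
= \prod_j \frac{\exp\big(b_j h_j + h_j W_j x\big)}{\sum_{h'_j \in \{0,1\}} \exp\big(b_j h'_j + h'_j W_j x\big)}
= \prod_j p(h_j \mid x).
$$

The middle equality uses $\exp(a_1 + \dots + a_H) = \exp(a_1)\cdots\exp(a_H)$ in the numerator, together with the fact that the sum over all $2^H$ hidden configurations in the denominator factorizes into independent sums over each $h'_j$.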
Author

I don't really get why the numerator turns into a product at around 10:26...

JimODonoghue
Author

c transpose and b transpose are the bias terms: one vector of biases for the hidden units and one for the visible units.

nigeldupaigel
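
Spelling that answer out as an equation (this assignment of letters follows the indexing suggested elsewhere in this thread, with $h_j$ for hidden units and $x_k$ for visible units; treat the letters as an assumption if the video defines them the other way around):

$$
E(x, h) = -\,h^\top W x - b^\top h - c^\top x
        = -\sum_{j,k} W_{j,k}\, h_j x_k - \sum_j b_j h_j - \sum_k c_k x_k,
$$

so $b$ collects the hidden-unit biases, $c$ the visible-unit biases, and the joint distribution is $p(x, h) = \exp(-E(x, h)) / Z$.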
Author

Thank you so much, your lectures are awesome.

osamahabdullah
Author

Many thanks for the lecture, I found it really useful. I'm a bit confused about the notation on the slide entitled Markov Network View, though. Firstly, have you split the equation onto multiple lines just to make it a bit more readable, or is it significant that the unary factors are on different lines from the pairwise factors? Secondly, from my understanding of MNs, a distribution can be written as a product of the potentials defined by the cliques of the graph (up to a normalising constant). Since it's a pairwise MN, I can see that the pairwise factors are represented in the graph, but I can't see where the unary factors are represented. What am I missing?

dombat
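
On the unary-factor question: in a Markov network, every single node is a clique by itself, so unary potentials attach to the nodes rather than to any edge; they are present in the graph even though only the pairwise factors correspond to edges, and splitting them onto separate lines does not change the factorization. Written out with the same (assumed) notation as above:

$$
p(x, h) = \frac{1}{Z}\,
\underbrace{\prod_{j,k} e^{W_{j,k} h_j x_k}}_{\text{pairwise factors on edges } (h_j, x_k)}
\;\underbrace{\prod_j e^{b_j h_j}}_{\text{unary factors on the } h_j}
\;\underbrace{\prod_k e^{c_k x_k}}_{\text{unary factors on the } x_k}.
$$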
Author

I think the use of single-letter symbols in formulas really obfuscates the meaning. We have much larger screens now and nobody does these calculations by hand, so why can't we use the full word, or at least shorten it in a more meaningful way? For example, instead of Bj and Ck, use Bh_i (the ith hidden-unit bias) and Bv_j (the jth visible-unit bias), or something like what we do in programming: vu[i].bias and hu[j].bias.

XiaosChannel
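
For what it's worth, here is how that might look in code: a hypothetical NumPy sketch of the RBM energy using descriptive names instead of single letters (the function and variable names are invented for illustration, and binary units are assumed):

```python
import numpy as np

def rbm_energy(visible, hidden, weights, visible_bias, hidden_bias):
    """Energy of one joint configuration (visible, hidden) of a binary RBM.

    visible:      (num_visible,) vector of 0/1 values
    hidden:       (num_hidden,)  vector of 0/1 values
    weights:      (num_hidden, num_visible) interaction matrix
    visible_bias: (num_visible,) visible-unit biases
    hidden_bias:  (num_hidden,)  hidden-unit biases
    """
    interaction = hidden @ weights @ visible   # sum_{j,k} W[j,k] * h[j] * x[k]
    return -(interaction + visible_bias @ visible + hidden_bias @ hidden)

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 4))
visible_bias = rng.normal(size=4)
hidden_bias = rng.normal(size=3)
print(rbm_energy(np.array([1, 0, 1, 1]), np.array([0, 1, 1]),
                 weights, visible_bias, hidden_bias))
```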
Author

What an awesome job! Thanks for your lecture.

ylrtcef
Author

I can't see what the vectors *c* and *b* are.
I watched the videos in this series on autoencoders first and understood them, but I didn't watch the videos preceding this one. Did I miss something?

janvonschreibe
Author

At 10:07: I'm not sure where to place Bayesian networks and HMMs. Do they belong to unsupervised learning, as in the video above, or to supervised learning, or do they simply form their own category in machine learning?

randywelt
Author

I am attempting to understand deep autoencoders. I've followed chapters 1 and 2. Can I omit chapters 3 and 4 (on CRFs)?

pi
Author

How can I decide on a cut-off point for RBM results in the case of unsupervised learning?

Author

Please give a link to the next video, so we can understand what this is all for.

stivstivsti
Author

Dear Hugo, I am implementing an RBM, but I find the energy function of the joint probability at 6:10 confusing:
$E(x, h) = -\sum_{j,k} W_{j,k}\, h_j x_k - \sum_k c_k x_k - \sum_j b_j h_j$
(the pairwise interactions, plus the visible-unit bias terms, plus the hidden-unit bias terms, all negated).
So how can we define Z, given that all of the values of the visible and hidden units have already been used in E(x, h)?

minh
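
On the Z question: the energy E(x, h) is evaluated for one particular configuration, while Z sums the exponentiated negative energy over every possible configuration of visible and hidden units, which is exactly what makes it expensive. For binary units:

$$
Z = \sum_{x \in \{0,1\}^{D}} \sum_{h \in \{0,1\}^{H}} \exp\big(-E(x, h)\big),
\qquad
p(x, h) = \frac{\exp(-E(x, h))}{Z},
$$

so with $D$ visible and $H$ hidden units the sum runs over $2^{D+H}$ terms, which is why Z is normally computed exactly only for tiny models.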
Author

Thank you, professor, for this video. I have two questions:
1. How do I compute the probability of an input vector x, for instance p(x = (1, 0, 1))?
2. Is it possible to feed the RBM a multi-valued input vector, i.e., where the possible values for each visible unit are {0, 1, 2, 3}?

Thank you in advance.

mahmoudalbardan
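
On the first question: for a binary RBM the hidden units can be summed out analytically, which gives p(x) up to the intractable constant Z. A hypothetical sketch (the function and variable names are invented for illustration, and the b-for-hidden / c-for-visible convention assumed above is used):

```python
import numpy as np

def unnormalized_log_prob(visible, weights, visible_bias, hidden_bias):
    """Returns log p(x) + log Z for a binary RBM, i.e. the negative free energy.

    Uses p(x) = (1/Z) * exp(c^T x) * prod_j (1 + exp(b_j + W_j . x)).
    """
    activation = hidden_bias + weights @ visible
    return visible_bias @ visible + np.sum(np.logaddexp(0.0, activation))

rng = np.random.default_rng(0)
weights = rng.normal(size=(2, 3))
visible_bias = rng.normal(size=3)
hidden_bias = rng.normal(size=2)

# Unnormalized log-probability of x = (1, 0, 1). To turn this into p(1, 0, 1)
# you would still have to divide by Z, i.e. sum exp(...) over all 2**3 visible vectors.
print(unnormalized_log_prob(np.array([1, 0, 1]), weights, visible_bias, hidden_bias))
```

On the second question: with visible units taking values in {0, 1, 2, 3}, the binary energy no longer applies directly; the usual route is to change the visible layer's conditional distribution (for example one-hot "softmax"/multinomial visible units, or Gaussian units for real-valued inputs), which changes the energy function accordingly.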
Author

Awesome lecture! What software did you use to make this video?

anchitbhattacharya
Author

Thank you, Hugo. Do you recommend any books on neural networks?

MatthewKleinsmith