Gibbs sampling

A minilecture describing Gibbs sampling.
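
Below is a minimal sketch of a Gibbs sampler for the zero-mean bivariate Gaussian example several commenters mention; the correlation rho = 0.8, the iteration count, and the starting point are illustrative assumptions, not values taken from the lecture.

import numpy as np

rho = 0.8                                   # assumed correlation, for illustration
n_iter = 5000
rng = np.random.default_rng(0)

theta = np.zeros(2)                         # arbitrary starting point
samples = np.empty((n_iter, 2))
for t in range(n_iter):
    # draw theta_1 | theta_2  ~  N(rho * theta_2, 1 - rho^2)
    theta[0] = rng.normal(rho * theta[1], np.sqrt(1 - rho**2))
    # draw theta_2 | theta_1  ~  N(rho * theta_1, 1 - rho^2)
    theta[1] = rng.normal(rho * theta[0], np.sqrt(1 - rho**2))
    samples[t] = theta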
Comments

This is the clearest explanation of Gibbs sampling out there. Thanks.

TheDebidatta

Seeing the algorithm "walk" around the plane really made it click for me. Also, this made clear why we need to find every single conditional first. Thank you for the great work!

jacobschultz

The example with the 2D Gaussian was invaluable to ground my understanding - thank you!

TheAIEpiphany

Clear explanation, clear voice, clear slides. Good Job! Thanks!

tg

Just wondering whether randomizing the order of sampling (like instead of going \theta_1, ..., \theta_K in a fixed order, you use a random permutation of 1, ..., K each sweep) would help here? Is there a particular reason why we sample in this order?

jiagengliu
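
Both choices are valid: a fixed (systematic) scan and a random-scan Gibbs sampler each leave the target distribution invariant, so the fixed order is presumably just for simplicity of exposition; randomizing the order can sometimes improve mixing when components are strongly coupled, but it is not required for correctness. A sketch of the random-scan variant, where update_conditional is a hypothetical placeholder for whatever function draws theta_k from its full conditional:

import numpy as np

def random_scan_gibbs(theta, update_conditional, n_iter, rng=None):
    # update_conditional(k, theta, rng) is assumed to return one draw from
    # p(theta_k | theta_{-k}); it is a placeholder, not something from the lecture.
    rng = rng or np.random.default_rng()
    samples = []
    for _ in range(n_iter):
        for k in rng.permutation(len(theta)):   # fresh random order every sweep
            theta[k] = update_conditional(k, theta, rng)
        samples.append(theta.copy())
    return np.array(samples)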

Amazing video! I was struggling with it and now I understand. Thank you so much!

jessicas

Thank you very much! This is nicely and simply explained.

bunnysm

Gibbs sampling is just a special case of MH

GooseGood
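
For context: Gibbs sampling can be read as Metropolis-Hastings with the proposal q(\theta' \mid \theta) = p(\theta'_k \mid \theta_{-k}) that resamples a single coordinate from its full conditional and leaves the rest fixed. With that proposal the acceptance probability is identically 1, which is also why no draws are ever rejected (see the acceptance question a few comments below). A sketch of the algebra, with \theta' differing from \theta only in coordinate k:

\alpha = \min\left(1,\ \frac{p(\theta')\, q(\theta \mid \theta')}{p(\theta)\, q(\theta' \mid \theta)}\right)
       = \min\left(1,\ \frac{p(\theta'_k \mid \theta_{-k})\, p(\theta_{-k}) \cdot p(\theta_k \mid \theta_{-k})}{p(\theta_k \mid \theta_{-k})\, p(\theta_{-k}) \cdot p(\theta'_k \mid \theta_{-k})}\right) = 1.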

Thanks for the great and clear video! One question though: starting at 3:11, why do the normal conditional distributions have mean of rho*theta and variance of [1-rho^2]? Where is this parameterization coming from?

atjebb
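
This is the standard formula for conditioning a bivariate Gaussian. Assuming the joint in the lecture is zero-mean with unit variances and correlation \rho (which matches the parameterization quoted in the comments), the general result

\theta_1 \mid \theta_2 \sim \mathcal{N}\!\left(\mu_1 + \rho\,\frac{\sigma_1}{\sigma_2}(\theta_2 - \mu_2),\ \sigma_1^2 (1 - \rho^2)\right)

reduces to \theta_1 \mid \theta_2 \sim \mathcal{N}(\rho\,\theta_2,\ 1 - \rho^2) when \mu_1 = \mu_2 = 0 and \sigma_1 = \sigma_2 = 1, i.e. mean \rho\theta_2 and variance 1 - \rho^2.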

Thanks for the video. I understood the concept but I am not an expert in probability. I know what conditional probability is, but I am still struggling to figure out what it means when you sample Theta_1 given Theta_2, ..., Theta_K, and I would not be able to explain why it works if someone asked me. In the example of p(Theta) ~ N(0, Sigma), Theta_1 ~ N(rho x Theta_2(0), [1 - rho^2]). Here the covariance matrix became the variance [1 - rho^2] and the mean became rho x Theta_2(0). Where does it come from?

ZbiggySmall

Are all the samples accepted, in contrast to the Metropolis-Hastings algorithm? Cool video.

UlrichArmel

very good explanation!! really helpful, thanks:)

mingyanchu

Great explanation! Not to nitpick, but from 2:14 the f(\theta) should be p(\theta), or am I looking at it the wrong way? If not, what's the f?
Nonetheless, really awesome explanation! Thanks!

philipruijten

I think the vector theta contains Theta_1 and Theta_2, and that when we plot a histogram of the Theta_1 data we get a marginal distribution of the bivariate normal.
What does it tell us? Is the mean of Theta_1 the value we want to estimate?

ccuuttww
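
That is the idea: after burn-in, the \theta_1 coordinates of the Gibbs draws are approximately samples from the marginal p(\theta_1), so their histogram approximates that marginal and their average is a Monte Carlo estimate of E[\theta_1]. A small sketch, assuming samples is the (n_iter, 2) array produced by a run like the one sketched near the top of this page:

import numpy as np

burn_in = 500                                # illustrative burn-in length
theta1 = samples[burn_in:, 0]                # approximate draws from the theta_1 marginal

posterior_mean = theta1.mean()               # Monte Carlo estimate of E[theta_1]
density, edges = np.histogram(theta1, bins=50, density=True)   # histogram of the marginal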

Nice lecture. Pretty clear explanation

weixuanli

Btw, how did you derive the conditional distributions from the joint? When I wrote out the full analytic form of the joint PDF divided by one of the single variable PDFs, the equation did not simplify easily.

jacobmoore
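
Dividing the joint by the marginal, p(\theta_1 \mid \theta_2) = p(\theta_1, \theta_2) / p(\theta_2), is the right move, but the quotient only simplifies after completing the square in \theta_1 inside the exponent. An alternative that skips that algebra is the general Gaussian conditioning identity: for a partition \theta = (\theta_a, \theta_b) with \theta \sim \mathcal{N}(\mu, \Sigma) and covariance blocks \Sigma_{aa}, \Sigma_{ab}, \Sigma_{bb},

\theta_a \mid \theta_b \sim \mathcal{N}\!\left(\mu_a + \Sigma_{ab}\Sigma_{bb}^{-1}(\theta_b - \mu_b),\ \Sigma_{aa} - \Sigma_{ab}\Sigma_{bb}^{-1}\Sigma_{ba}\right),

which in the zero-mean, unit-variance, correlation-\rho case gives the \mathcal{N}(\rho\,\theta_2,\ 1 - \rho^2) conditional quoted in other comments.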

I have seen some papers and code where they use more complicated conditional probabilities, e.g. sometimes some of the conditionals involve a gamma distribution. Can you comment on how to extend Gibbs sampling to such cases?

Xnaarkhoo
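
Gamma full conditionals usually show up when a precision (inverse variance) is given a conjugate Gamma prior; the Gibbs mechanics stay the same, only the distribution you draw from changes. A hedged sketch for data y_i ~ N(mu, 1/tau) with priors mu ~ N(mu0, 1/kappa0) and tau ~ Gamma(a0, b0); the toy data and hyperparameters are illustrative assumptions, not taken from the lecture:

import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(2.0, 1.0, size=100)           # toy data for illustration
n = len(y)

mu0, kappa0 = 0.0, 1.0                       # assumed prior mean and precision for mu
a0, b0 = 2.0, 2.0                            # assumed Gamma(shape, rate) prior for tau

mu, tau = 0.0, 1.0                           # starting values
draws = []
for _ in range(5000):
    # tau | mu, y ~ Gamma(a0 + n/2, rate = b0 + 0.5 * sum((y - mu)^2))
    rate = b0 + 0.5 * np.sum((y - mu) ** 2)
    tau = rng.gamma(a0 + n / 2, 1.0 / rate)  # numpy's gamma takes a scale parameter
    # mu | tau, y ~ N((kappa0*mu0 + tau*sum(y)) / (kappa0 + n*tau), 1 / (kappa0 + n*tau))
    prec = kappa0 + n * tau
    mu = rng.normal((kappa0 * mu0 + tau * y.sum()) / prec, np.sqrt(1.0 / prec))
    draws.append((mu, tau))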

Thanks a lot. Very good explanation.

boshranabaei

I'm a bit too green on this so sorry, but how did you construct the conditioning? I.e., how did you go from theta_1 | theta_2 to N(rho*theta_2, 1 - rho^2)?
EDIT: I should've researched before asking; this seems like a solved problem, just plug in the numbers.

Carutsu

If possible, can you please share a link to the slides? Thanks!

SrikantGadicherla