Constrained parameters? Use Metropolis-Hastings

This video explains the problem with naively running random walk Metropolis on constrained parameters and the remedy of using Metropolis-Hastings in these situations.
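A minimal sketch of the remedy the video describes, under illustrative assumptions (a positive parameter sigma with an unnormalised Gamma(2, 1) stand-in target, not the video's actual posterior): propose from a normal truncated at zero and include the Hastings correction for the truncation.

```python
import math
import random

def norm_cdf(x, mu, sd):
    # Standard normal CDF evaluated at x for N(mu, sd), via the error function.
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def target(sigma):
    # Stand-in posterior over sigma > 0: an unnormalised Gamma(2, 1) density.
    return sigma * math.exp(-sigma) if sigma > 0 else 0.0

def mh_constrained(n_iter=60000, step=0.5, sigma0=1.0, seed=1):
    random.seed(seed)
    sigma, draws = sigma0, []
    for _ in range(n_iter):
        # Proposal: normal centred at sigma, redrawn until it lands in
        # (0, inf) -- i.e. a normal truncated at zero.
        prop = random.gauss(sigma, step)
        while prop <= 0:
            prop = random.gauss(sigma, step)
        # The truncation makes the proposal asymmetric, so the Hastings
        # ratio q(sigma | prop) / q(prop | sigma) must enter the acceptance
        # probability. The symmetric normal kernels cancel; only the
        # truncation masses P(N(., step) > 0) survive.
        q_ratio = (1 - norm_cdf(0, sigma, step)) / (1 - norm_cdf(0, prop, step))
        accept = target(prop) / target(sigma) * q_ratio
        if random.random() < accept:
            sigma = prop
        draws.append(sigma)
    return draws
```

With the correction in place the chain leaves the Gamma(2, 1) target invariant, so after burn-in the sample mean should sit near the true mean of 2.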

Comments

Why does multiplying by the jumping distribution fix the issue?

Dupamine
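On the question above, a hedged sketch of the standard argument (notation mine, not from the video): write q(θ' | θ) for the jumping distribution and π for the target. The Metropolis-Hastings acceptance probability is

```latex
\alpha(\theta \to \theta') = \min\!\left(1,\;
  \frac{\pi(\theta')\, q(\theta \mid \theta')}
       {\pi(\theta)\, q(\theta' \mid \theta)}\right),
```

which makes detailed balance, π(θ) q(θ' | θ) α(θ → θ') = π(θ') q(θ | θ') α(θ' → θ), hold term by term. Near a constraint boundary the jumping distribution is asymmetric (it is easier to propose a move away from the boundary than a move back towards it), and the ratio of proposal densities is exactly the factor that compensates; dropping it leaves a chain whose stationary distribution is not π.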

Why doesn't Metropolis also have the same jumping-distribution problem?

Dupamine

Would you mind sharing the application in Mathematica? And is the R code available as well?

tugbakapucu

Great video once again. May I check that:
1. For the final simulation, the shape of the posterior distribution is arbitrary, though in this case it is deliberately right-skewed and close to zero so that the bias of the rejection-sampling variant becomes apparent.

2. I understand that vanilla Metropolis is unbiased, whereas the rejection-sampling variant is biased. Computationally, though, I have trouble seeing how they differ, since in both cases you reject negative samples of sigma and repeat the sampling process until you get a positive one. In short, following @4.22, I would have coded both approaches the same way.

3. Practically speaking, is it safe to neglect the bias of the rejection-sampling variant if the distribution is, say, far from zero and has large positive values?

Thank you for the work you're doing. I am a PhD student in statistics and machine learning who did his undergrad in engineering, so I am trying to pick up graduate-level statistics to bridge the gap. I am learning a lot (and quickly) from your resource!

inothernews
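On the second question above: the two schemes draw proposals identically (resample until positive); they differ only in the acceptance probability. A sketch under illustrative assumptions (an Exponential(1) target and names of my choosing, not the video's exact setup) that runs both acceptance rules on the same truncated proposal:

```python
import math
import random

def prob_positive(mu, sd):
    # P(N(mu, sd) > 0): the mass the truncated proposal keeps.
    return 0.5 * (1 - math.erf(-mu / (sd * math.sqrt(2))))

def target(s):
    # Unnormalised Exponential(1) density: right-skewed with mass near
    # zero, the regime where the naive scheme's bias shows up.
    return math.exp(-s) if s > 0 else 0.0

def chain(n_iter, step, hastings, seed):
    random.seed(seed)
    s, draws = 1.0, []
    for _ in range(n_iter):
        prop = random.gauss(s, step)
        while prop <= 0:           # both schemes resample until positive...
            prop = random.gauss(s, step)
        accept = target(prop) / target(s)
        if hastings:               # ...but only one corrects for the truncation
            accept *= prob_positive(s, step) / prob_positive(prop, step)
        if random.random() < accept:
            s = prop
        draws.append(s)
    return draws

naive = chain(60000, 1.0, hastings=False, seed=2)
exact = chain(60000, 1.0, hastings=True, seed=3)
```

A detailed-balance check shows the uncorrected chain actually targets a density proportional to π(σ) · P(proposal from σ stays positive), which down-weights values near zero: its sample mean sits above the true value of 1 (analytically about 1.18 at this step size), while the corrected chain recovers it.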

Hello Ben, awesome video. I wonder if you might be willing to share some of your Mathematica notebooks? I am starting to learn Mathematica and it would be helpful.

davidsewell

Hi,

Thanks for these masterpiece videos. In what order should one watch them? I am very new to Bayesian statistics and it hasn't been easy for me.

ikennaonyekwelu

Hi... what do you mean when you say "kernel"?

alejozen

This is a fantastic video.
I have just one question: how do I obtain the likelihood and prior?
I am guessing that for the likelihood you can calculate the product of the probability densities of each data point under N(mean, b_t).
What about the prior?

mikolajwojnicki
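On the question above: yes, with data modelled as y_i ~ N(mean, sigma), the likelihood is the product of the normal densities of the data points, usually accumulated in log space to avoid underflow; the prior is a separate modelling choice for the parameter before seeing data. A minimal sketch (the Exponential(1) prior on sigma is an illustrative choice, not from the video):

```python
import math

def log_likelihood(data, mu, sigma):
    # Sum of log N(y | mu, sigma) densities; working in logs avoids the
    # underflow that the raw product of densities causes on long datasets.
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (y - mu)**2 / (2 * sigma**2) for y in data)

def log_prior(sigma):
    # Illustrative Exponential(1) prior; zero density outside sigma > 0.
    return -sigma if sigma > 0 else -math.inf

def log_posterior(data, mu, sigma):
    # Unnormalised: the evidence term is a constant and cancels in the
    # Metropolis-Hastings acceptance ratio.
    return log_likelihood(data, mu, sigma) + log_prior(sigma)
```

In a sampler you would exponentiate the difference of log posteriors (or compare logs directly) rather than forming the raw densities.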

What the hell is the difference between the first and second proposal for sigma-t-prime in the Metropolis algorithm? Both looked exactly the same.

dariosilva