Understanding the Metropolis-Hastings algorithm


Metropolis-Hastings is an algorithm that allows us to sample from a generic probability distribution, which we'll call our target distribution, even if we don't know the normalizing constant. To do this, we construct and sample from a Markov chain whose stationary distribution is the target distribution that we're looking for. It consists of picking an arbitrary starting value and then iteratively accepting or rejecting candidate samples drawn from another distribution, one that is easy to sample from.

Let's say we want to produce samples from a target distribution, which we'll call p(θ), but we only know it up to a normalizing constant, that is, up to proportionality. What we do have is g(θ). We don't know the normalizing constant because perhaps this is difficult to integrate, so we only have g(θ) to work with.

The Metropolis-Hastings algorithm proceeds as follows. The first step is to select an initial value for θ; we'll call it θ₀. The next step is, for a large number of iterations, so for i from 1 up to some large number m, to repeat the following. The first thing we do is draw a candidate, which we'll call θ*, from a proposal distribution q(θ* | θᵢ₋₁), where θᵢ₋₁ is the previous iteration's value. We'll talk more about this q distribution soon. The next step is to compute the following ratio, which we'll call α. It is g evaluated at the candidate, divided by the density q evaluated at the candidate given the previous iteration, all divided by g evaluated at the old iteration over q evaluated at the old iteration given the candidate value:

α = [g(θ*) / q(θ* | θᵢ₋₁)] / [g(θᵢ₋₁) / q(θᵢ₋₁ | θ*)]

If we rearrange this, it is g of the candidate times q of the previous value given the candidate, divided by g at the previous value times q evaluated at the candidate given the previous value:

α = g(θ*) q(θᵢ₋₁ | θ*) / [g(θᵢ₋₁) q(θ* | θᵢ₋₁)] .....
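The steps described above can be sketched in Python. The transcript cuts off before the final step, so the accept/reject rule used here (accept θ* with probability min(1, α)) is the standard completion of the algorithm; the concrete target g (a standard normal known only up to its constant) and the random-walk normal proposal are illustrative assumptions, not choices made in the video.

```python
import numpy as np

def metropolis_hastings(g, q_sample, q_density, theta0, m, rng):
    """Sample from a target known only up to proportionality via g(theta)."""
    samples = np.empty(m)
    theta = theta0                                  # step 1: initial value theta_0
    for i in range(m):                              # step 2: iterate i = 1..m
        theta_star = q_sample(theta, rng)           # draw candidate from q(. | theta)
        # alpha = [g(theta*) / q(theta*|theta)] / [g(theta) / q(theta|theta*)]
        alpha = (g(theta_star) * q_density(theta, theta_star)) / (
                 g(theta) * q_density(theta_star, theta))
        if rng.uniform() < alpha:                   # accept with probability min(1, alpha)
            theta = theta_star                      # otherwise keep the old value
        samples[i] = theta
    return samples

# Illustrative target: standard normal known only up to its normalizing constant.
g = lambda th: np.exp(-0.5 * th**2)
# Illustrative proposal: random walk, normal with sd 1 centered at the current value.
q_sample = lambda th, rng: rng.normal(th, 1.0)
q_density = lambda x, given: np.exp(-0.5 * (x - given)**2)

rng = np.random.default_rng(0)
draws = metropolis_hastings(g, q_sample, q_density, theta0=0.0, m=5000, rng=rng)
print(draws.mean(), draws.std())  # should land near 0 and 1, the target's moments
```

Note that this normal proposal is symmetric, q(θ* | θ) = q(θ | θ*), so the two q terms in α cancel and the ratio reduces to g(θ*)/g(θ); with an asymmetric proposal the q terms must be kept.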
Comments

Can't believe how good your backwards handwriting is ;)

sethjchandler

This is a pretty nice video, thank you so much!!! I feel like I understood a lot of things much better after listening to it. But there's also homework to do if one wants to fully appreciate the content: one should know a bit about Bayesian statistics and terms like normalizing constants. If not, checking Wikipedia might be a good option.

justinxia

Thank you. Great set you got there for presenting it, almost convinced me you're actually left-handed lololol

ecapanema

Explained everything super clearly, thank you

cindyxu

Very nice and concise presentation, thanks!

MiikaMakiVL

Brilliant explanation!!! This is the best one

gihanna

Great video! Still wondering, can you perhaps further explain why the terms that cancel are equal for a normal distribution at 7:21? Can't wait for the next video coming out!

freezeandfry

Could you explain the intuitions behind the formula for alpha?

jacobmoore

Are you writing backwards?! Very nice video, thank you for sharing.

thefarrellgroup

Can we have an example of programming this in MATLAB?

abdelkarimaths

Dear Professor, what is "q" here?

mohammadghani

I still did not understand. I will wait for the next video. Thank you anyway !

JohnSmith-lfxm

You're missing an example. Without an example, this looks like science fiction.

donpkchannel

Nothing more than a repeat of the Wiki: you didn't specify what theta_0 is or how to draw the candidate; not even one example.

CW-peci