Posterior & MAP for Normal distribution with unknown precision


We have seen a couple of times now that the Maximum Likelihood Estimate (MLE) is quite sensitive to outliers. If we have a noisy or corrupted dataset (which is common in Machine Learning applications), the resulting estimate can be of poor quality.
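To see this sensitivity concretely, here is a minimal NumPy sketch (not the video's TFP code; the values are illustrative assumptions): the MLE of the variance with a known mean, before and after a single corrupted measurement is appended.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 10.0                         # known mean, e.g. nominal hole diameter
x = rng.normal(mu, 0.1, size=20)  # clean measurements

def var_mle(x, mu):
    """MLE of the variance when the mean is known: mean squared deviation."""
    return np.mean((x - mu) ** 2)

clean = var_mle(x, mu)
x_corrupt = np.append(x, 13.0)    # one corrupted measurement far from mu
corrupt = var_mle(x_corrupt, mu)

print(clean, corrupt)             # the single outlier inflates the estimate drastically
```

A single bad point dominates the sum of squared deviations, which is exactly the behavior regularization is meant to tame.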

We therefore have to encode prior knowledge, or in other words: we have to include regularization in our estimation. In this video, we have a dataset of normally distributed values; think, for instance, of measured diameters of drilled holes. We know the mean of the distribution for sure, but we are unsure about the standard deviation/variance/precision of our measurements (and hence also of the holes' diameters).

In order to create a regularized estimate, we put prior knowledge on our precision. We work in terms of the precision, as this simplifies the analysis, and it allows us to derive a closed-form posterior and a MAP estimate.
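As a sketch of where this leads (assuming, as is standard for this conjugate model, a Gamma(a, b) prior on the precision λ; the video's exact hyper-parameter names may differ): with known mean μ and data x₁,…,xₙ,

```latex
% Model: x_i ~ N(mu, lambda^{-1}) with mu known, prior lambda ~ Gamma(a, b)
p(\lambda \mid x) \;\propto\; \underbrace{\lambda^{a-1} e^{-b\lambda}}_{\text{prior}}
\cdot \underbrace{\lambda^{n/2} \exp\!\Big(-\tfrac{\lambda}{2}\textstyle\sum_{i=1}^{n}(x_i-\mu)^2\Big)}_{\text{likelihood}}
\;=\; \mathrm{Gamma}\!\Big(a + \tfrac{n}{2},\; b + \tfrac{1}{2}\textstyle\sum_{i=1}^{n}(x_i-\mu)^2\Big)

\lambda_{\text{MAP}} \;=\; \frac{a + \tfrac{n}{2} - 1}{\,b + \tfrac{1}{2}\sum_{i}(x_i-\mu)^2\,},
\qquad
\lambda_{\text{MLE}} \;=\; \frac{n}{\sum_{i}(x_i-\mu)^2}
```

The hyper-parameters a and b act as pseudo-observations: the MAP estimate reduces to the MLE as their influence vanishes relative to the data.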

With the help of an example in TensorFlow Probability, we check our findings and find an explanation of why we need regularization.
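The same check can be sketched without TensorFlow Probability, using plain NumPy (the hyper-parameter values and the corrupted data point are illustrative assumptions, not the video's exact numbers):

```python
import numpy as np

rng = np.random.default_rng(42)
mu = 10.0                               # known mean
x = rng.normal(mu, 0.1, size=50)
x = np.append(x, 12.0)                  # a corrupted measurement

a, b = 2.0, 0.02                        # Gamma prior hyper-parameters (assumed)
n = len(x)
ss = np.sum((x - mu) ** 2)              # sufficient statistic

lam_mle = n / ss                        # MLE of the precision
a_post = a + n / 2                      # posterior Gamma shape
b_post = b + ss / 2                     # posterior Gamma rate
lam_map = (a_post - 1) / b_post         # mode of the Gamma posterior = MAP

# Convert back to standard deviations for comparison
print("sigma_MLE:", 1 / np.sqrt(lam_mle))
print("sigma_MAP:", 1 / np.sqrt(lam_map))
```

With an informative prior, the MAP estimate is pulled toward the prior mode and reacts less violently to the corrupted point than the MLE does.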

-------

Timestamps:
00:00 Introduction
00:54 Motivation: Sensitive MLE
01:13 Directed Graphical Model
03:17 The joint distribution
06:51 Posterior by Bayes' Rule
08:25 Deriving the Posterior
10:19 Identifying the Posterior
11:45 Discussing the Posterior's Parameters
13:33 MAP estimate derivation
18:24 Discussing the MAP
19:06 MAP for standard deviation
19:49 Discussing influence of Hyper-Parameters
21:12 TFP: Create a dataset
22:25 TFP: Fixing the Mu
23:04 TFP: MLE
23:44 TFP: Defining Hyper-Parameters
24:14 TFP: MAP
25:34 TFP: MLE vs. MAP
26:03 TFP: Posterior's Parameters
26:55 TFP: Creating the Posterior
27:12 TFP: Mode of the Posterior
28:09 TFP: Corrupt data
29:59 Outro