Laplace Smoothing in Naive Bayes || Lesson 50 || Machine Learning || Learning Monkey ||

#machinelearning #learningmonkey

In this class, we discuss Laplace Smoothing in Naive Bayes.
To understand Laplace Smoothing in Naive Bayes, we first have to understand what Laplace smoothing is.

Let us take an example to understand what Laplace smoothing means.

Take a football team named Team1.

Team1 played with Team2 and it lost the match.

Team1 played with Team5 and it lost the match.

Team1 played with Team6 and it lost the match.

Team1 played with Team7 and it lost the match.

Team1 played with Team8 and it lost the match.

Now Team1 is playing Team9. What is the probability that Team1 wins the match?

How do we calculate the probability?

P(win) = No. of matches won / (No. of matches won + No. of matches not won)

P(win) = 0/(0+5).

So the probability comes out to be zero.

Is it reasonable to assign a probability of zero just because it lost all its previous matches?

We should assign a very small, non-zero probability instead.

To obtain such a probability, we use Laplace smoothing.

Add one to every count.

P(win) = (0 + 1) / ((0 + 1) + (5 + 1))

P(win) = 1/7.
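As a quick illustration, here is a minimal sketch of the same calculation in Python (the counts are just the five losses from the example above):

```python
# Team1 has 0 wins and 5 losses so far.
wins, losses = 0, 5

# Raw estimate: zero probability of winning the next match.
p_raw = wins / (wins + losses)                           # 0 / 5 = 0.0

# Laplace smoothing: add 1 to every count before dividing.
p_smoothed = (wins + 1) / ((wins + 1) + (losses + 1))    # 1 / 7

print(p_raw)        # 0.0
print(p_smoothed)   # 0.142857... (1/7)
```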

We apply the same logic to our Naive Bayes model.

P(word | Ck=1) = Number of positive feedbacks containing the word / Total no. of positive feedbacks

If a word in the testing data is not found in any of the positive feedbacks, then the probability is zero.

It's not reasonable to assign a probability of zero to such a word.

A probability of zero means the word does not belong to the positive class at all.

How can we say that it does not belong to the positive class?
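Worse, Naive Bayes multiplies the per-word likelihoods together, so a single zero likelihood wipes out the whole product. A minimal sketch of this effect (the word probabilities below are made-up illustrative values):

```python
# Hypothetical per-word likelihoods for the positive class.
# "great" and "movie" were seen in positive feedback, "awful" was not.
p_word_given_pos = {"great": 0.30, "movie": 0.25, "awful": 0.0}

likelihood = 1.0
for word in ["great", "movie", "awful"]:
    likelihood *= p_word_given_pos[word]

print(likelihood)  # 0.0 -- one unseen word zeroes out the entire product
```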

So we apply Laplace smoothing to the probability.

We give a small, non-zero probability to a word that is not found in the positive class.

The same applies to the negative class also.

P(word | Ck=1) = (Number of positive feedbacks containing the word + alpha) / (Total no. of positive feedbacks + alpha * K)

Alpha can be any positive value.

Usually, alpha is taken as 1.

K is the number of classes.

In our case K = 2.
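Following the formula above with alpha = 1 and K = 2, a small sketch of the smoothed word likelihood (the feedback counts used here are hypothetical):

```python
# Smoothed likelihood from the lesson's formula: (count + alpha) / (total + alpha * K).
def smoothed_likelihood(positives_with_word, total_positives, alpha=1, k=2):
    return (positives_with_word + alpha) / (total_positives + alpha * k)

# A word that never appears in any of 10 positive feedbacks:
print(smoothed_likelihood(0, 10))   # 1/12 ~ 0.083 instead of 0
# A word that appears in 4 of the 10 positive feedbacks:
print(smoothed_likelihood(4, 10))   # 5/12 ~ 0.417
```

Libraries such as scikit-learn expose the same idea through the alpha parameter of their Naive Bayes classifiers, e.g. MultinomialNB(alpha=1.0).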

How to find the best alpha is explained in our next class.


Comments

Past results are not a predictor of the future, hence the need for Laplace smoothing, allowing a probability closer to real-life uncertainty. Nice video. Thanks.

dhavalvyas

You should gain more popularity given how good your explanation is. Kudos!

PulseQuizz

Very well explained. Thank you very much for this video, you make it seem easy!

gandalfbaggins

Love the way you teach. You explain complex ML algorithms and techniques in just an 8-10 minute video with such ease that I'm fascinated. Please keep uploading videos.
Just a small request: please also upload videos on dimensionality reduction techniques (like PCA, LDA, SVD, t-SNE, etc.), Expectation Maximisation, RL, etc.

shubhamchaudhary

Sir, if I apply Laplace smoothing, do I need to apply it to all the other features in the dataset as well, or only to the feature with zero probability?

keerthanasamuthiram

Sir, please make a video on how to do Laplace smoothing in R.

thejuhulikal