Laplace Smoothing in Naive Bayes || Lesson 50 || Machine Learning || Learning Monkey ||

#machinelearning #learningmonkey
In this class, we discuss Laplace Smoothing in Naive Bayes.
To understand Laplace smoothing in Naive Bayes, we first have to understand what Laplace smoothing is.
Let us take an example to see what Laplace smoothing means.
Consider a football team named Team1.
Team1 played against Team2 and lost the match.
Team1 played against Team5 and lost the match.
Team1 played against Team6 and lost the match.
Team1 played against Team7 and lost the match.
Team1 played against Team8 and lost the match.
Now Team1 is playing against Team9. What is the probability that Team1 wins the match?
How do we calculate this probability?
P(win) = (number of matches won) / (number of matches won + number of matches lost)
P(win) = 0/(0+5).
So the probability comes out to zero.
Is it reasonable to assign a probability of zero just because Team1 lost all of its previous matches?
We should instead assign a very small probability.
To obtain such a small probability we use Laplace smoothing.
Add one to every count.
P(win) = (0 + 1) / ((0 + 1) + (5 + 1))
P(win) = 1/7.
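As a quick illustration, here is a minimal Python sketch of this add-one calculation; the function name and the counts are just placeholders for the example above.

```python
def laplace_smoothed_win_probability(wins, losses, alpha=1):
    """Add-one (Laplace) smoothed estimate of P(win)."""
    # Add alpha to every count, exactly as in the example above.
    return (wins + alpha) / ((wins + alpha) + (losses + alpha))

# Team1 won 0 and lost 5 of its previous matches.
print(laplace_smoothed_win_probability(0, 5))  # (0 + 1) / ((0 + 1) + (5 + 1)) = 1/7
```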
We apply the same logic to our Naive Bayes model.
P(word | Ck = 1) = (number of positive feedbacks containing the word) / (total number of positive feedbacks).
If a word from the testing data is not found in any of the positive feedbacks, then this probability is zero.
It is not reasonable to assign a probability of zero to a word.
A probability of zero means the word does not belong to the positive class at all.
But how can we claim that it does not belong to the positive class?
So we apply Laplace smoothing to this probability.
We assign a small probability to a word that is not found in the positive class.
The same applies to the negative class as well.
P(word | Ck = 1) = (number of positive feedbacks containing the word + alpha) / (total number of positive feedbacks + alpha * K).
Alpha can be any positive value.
Usually, alpha is taken as 1.
K is the number of classes.
In our case K = 2.
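Below is a minimal sketch, assuming a simple bag-of-words setup, of how this smoothed likelihood can be computed; the feedback texts and the queried word are made-up placeholders. For reference, scikit-learn's MultinomialNB also applies this kind of additive smoothing via its alpha parameter.

```python
def smoothed_word_likelihood(word, feedbacks, alpha=1, k=2):
    """P(word | class) with Laplace smoothing.

    feedbacks: list of feedback texts that belong to one class
               (e.g. all positive feedbacks).
    alpha:     smoothing strength, usually 1.
    k:         number of classes, 2 in our positive/negative case.
    """
    # Number of feedbacks in this class that contain the word.
    count = sum(1 for text in feedbacks if word in text.lower().split())
    # Laplace-smoothed estimate: never exactly zero.
    return (count + alpha) / (len(feedbacks) + alpha * k)

# Hypothetical positive feedbacks; "awful" never appears in them,
# yet it still gets a small non-zero probability instead of 0.
positive_feedbacks = ["great product", "great quality", "really great"]
print(smoothed_word_likelihood("awful", positive_feedbacks))  # (0 + 1) / (3 + 2) = 0.2
```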
How to find the best alpha is explained in our next class.
Link for playlists: