How to find the Entropy and Information Gain in Decision Tree Learning by Mahesh Huddar


In this video, I discuss how to find the entropy and information gain for a given set of training examples when constructing a decision tree.
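
For reference, here is a minimal Python sketch of the two quantities the video covers; the function names and the example split are illustrative and not taken from the video:

import math

def entropy(pos, neg):
    # Entropy of a set with `pos` positive and `neg` negative examples.
    total = pos + neg
    e = 0.0
    for count in (pos, neg):
        if count:                       # treat 0 * log2(0) as 0
            p = count / total
            e -= p * math.log2(p)
    return e

def information_gain(pos, neg, subsets):
    # Gain(S, A) = Entropy(S) - sum(|Sv|/|S| * Entropy(Sv)), where `subsets`
    # lists the (pos, neg) counts of the subsets produced by splitting on A.
    total = pos + neg
    remainder = sum((p + n) / total * entropy(p, n) for p, n in subsets)
    return entropy(pos, neg) - remainder

# Illustrative numbers: 9 positive / 5 negative examples, split three ways.
print(round(entropy(9, 5), 3))                                     # 0.94
print(round(information_gain(9, 5, [(2, 3), (4, 0), (3, 2)]), 3))  # 0.247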

entropy nptel,
entropy explained,
entropy data mining,
entropy data mining example,
entropy based discretization example data mining,
entropy machine learning,
entropy machine learning example,
entropy calculation machine learning,
entropy in machine learning in hindi,
information gain decision tree,
gain decision tree,
information gain and entropy in the decision tree,
information gain,
information gain and entropy,
information gain feature selection,
information gain calculation,
information gain and Gini index,
information gain for continuous-valued attributes
Comments

Hi sir, this is the best explanation I have seen so far. I shared this link with my friends in my data science course. Thanks for your support. Also, for Issues in Decision Tree Learning, can you please post the answers for the last two questions (handling attributes with different costs, and alternative measures for selecting attributes)?

terryterry

Simply super explanation of how the root node depends on the information gain of the attributes! If you get questions from listeners, you understand that they like your videos.

sachinahankari

Thank you so much! I was able to submit my assignment in my master's because of this video.

purplishjen

To get Entropy = 0.94, you need to divide the answer worked out using ln x by ln 2, because log2(x) = ln(x) / ln(2).

shiva
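
A quick Python check of the change-of-base point above; math.log2 and ln x / ln 2 give the same value (9/14 and 5/14 are the class proportions quoted in other comments):

import math

p_yes, p_no = 9 / 14, 5 / 14
# log2(x) == ln(x) / ln(2), so either form gives the same entropy
e_ln   = -(p_yes * math.log(p_yes) / math.log(2) + p_no * math.log(p_no) / math.log(2))
e_log2 = -(p_yes * math.log2(p_yes) + p_no * math.log2(p_no))
print(round(e_ln, 2), round(e_log2, 2))   # 0.94 0.94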

Your presentation is excellent and clear. Thank you for making these videos available to everyone.

wryltxw

Clean and neat explanation. Thank you, sir.

maheshm

I am a master's student of data science at a German university and this has helped me thoroughly with my ML exam! Thank you, dear sir!

ashutoshmahajan

Thank you very much, sir; this completely cleared my doubt.

vighneshchavan

This was so easy to understand! Thank you so much.

loveparks

Very lucid explanation. Thanks a ton, Prof.

prateekmishra

Thank you so much, very simply explained.

avanishkamak

Can you explain this: "One interpretation of entropy from information theory is that it specifies the minimum number of bits of information needed to encode the classification of an arbitrary member of S"?

unique_bhanu
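
A brief gloss on the line quoted above (this is the standard information-theory reading, not something specific to this video): entropy is the average number of bits an optimal code needs to transmit one class label. If every member of S is positive, the label is certain and needs 0 bits; if the classes are split 50/50, a full bit per example is needed; for a 9-positive / 5-negative set, -(9/14) * log2(9/14) - (5/14) * log2(5/14) ≈ 0.94, i.e. about 0.94 bits per example on average.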

Ahh, here I am revising for my final data mining exam, wish me luck!

blessaddo

Since we calculate the entropy of a given dataset from one column only, that is, the classification column, can we say the entropy of the dataset remains the same when we evaluate each attribute, since the number of yes and no examples stays the same? Please answer fast.

aarushgandhi
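
On the question above: yes, Entropy(S) for the whole dataset is computed from the class column alone, so it is the same no matter which attribute is being evaluated. What differs between attributes is the weighted entropy of the subsets each attribute creates, which is why Gain(S, A) = Entropy(S) - sum over v of (|Sv|/|S|) * Entropy(Sv) still ranks the attributes even though the first term is fixed.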

Hello Sir, awesome job on this video.

EndlesRidge

Reply please. I have a doubt about how to calculate the log base 2 term. How is this calculation done? For example, (9/14) log2 (9/14) - how do we arrive at an answer like 0.94?

husnain
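
A worked version of the calculation asked about above, assuming the usual 9 positive and 5 negative examples:

Entropy(S) = -(9/14) * log2(9/14) - (5/14) * log2(5/14)
           = -(0.643) * (-0.637) - (0.357) * (-1.485)
           ≈ 0.410 + 0.530
           ≈ 0.940

On a calculator without a log-base-2 key, log2(x) can be computed as ln(x) / ln(2) or log10(x) / log10(2).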

How do you calculate the gain and entropy of continuous data? Is it possible to do so?

bollytainment
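
One common answer to the continuous-data question above is the threshold-based approach used by C4.5-style learners: sort the attribute values, consider a candidate threshold at each midpoint between adjacent distinct values, and keep the binary split with the highest information gain. A minimal sketch; the data and function names are illustrative, not from the video:

import math

def entropy(labels):
    # Entropy of a list of class labels.
    total = len(labels)
    e = 0.0
    for c in set(labels):
        p = labels.count(c) / total
        e -= p * math.log2(p)
    return e

def best_threshold(values, labels):
    # Return (threshold, gain) for the best binary split: value <= t versus value > t.
    base = entropy(labels)
    pairs = sorted(zip(values, labels))
    best_t, best_gain = None, 0.0
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue                     # only midpoints between distinct values
        t = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lab for v, lab in pairs if v <= t]
        right = [lab for v, lab in pairs if v > t]
        gain = (base
                - len(left) / len(pairs) * entropy(left)
                - len(right) / len(pairs) * entropy(right))
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t, best_gain

# Illustrative temperature values with yes/no class labels
temps = [40, 48, 60, 72, 80, 90]
labels = ['no', 'no', 'yes', 'yes', 'yes', 'no']
print(best_threshold(temps, labels))     # (54.0, ~0.459)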