#41 Maximum Likelihood & Least Squared Error Hypothesis |ML|

Comments

Best teaching skills ever, especially for students of all categories 🎊

sanjays

Thank you so much! I completed my 4-1 Machine Learning exam today, and I followed only your videos for it. Your content is amazing, mam. Helped me 100%. Thank you once again 🥺🥶

ThanushKumarVusa

UNIT - III
Bayesian learning – Introduction, Bayes theorem, Bayes theorem and concept learning, Maximum
Likelihood and least squared error hypotheses, maximum likelihood hypotheses for predicting
probabilities, minimum description length principle, Bayes optimal classifier, Gibbs algorithm, Naïve
Bayes classifier, an example: learning to classify text, Bayesian belief networks, the EM algorithm.
Computational learning theory – Introduction, probably learning an approximately correct hypothesis,
sample complexity for finite hypothesis spaces, sample complexity for infinite hypothesis spaces, the
mistake bound model of learning.
Instance-Based Learning – Introduction, k-nearest neighbour algorithm, locally weighted regression,
radial basis functions, case-based reasoning, remarks on lazy and eager learning.
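
The unit's headline result, that under Gaussian noise the maximum likelihood hypothesis is exactly the least squared error hypothesis, can be checked numerically. The sketch below is not from the video; it fits a hypothetical slope-only model h(x) = w·x to made-up data and shows that maximizing the Gaussian log-likelihood and minimizing the sum of squared errors select the same hypothesis.

```python
# Minimal sketch (illustrative data, not from the video) of the claim that
# the ML hypothesis under Gaussian noise d_i = f(x_i) + e_i coincides with
# the least-squared-error hypothesis.
import math

xs = [1.0, 2.0, 3.0, 4.0]
ds = [2.1, 3.9, 6.2, 7.8]      # noisy targets around f(x) = 2x
sigma = 0.5                    # assumed known noise standard deviation

def log_likelihood(w):
    """Gaussian log-likelihood of the hypothesis h(x) = w*x."""
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (d - w * x) ** 2 / (2 * sigma ** 2)
               for x, d in zip(xs, ds))

def squared_error(w):
    """Sum of squared errors of h(x) = w*x."""
    return sum((d - w * x) ** 2 for x, d in zip(xs, ds))

# Both criteria pick the same hypothesis from a grid of candidate slopes.
candidates = [i / 100 for i in range(100, 301)]   # w in [1.00, 3.00]
w_ml = max(candidates, key=log_likelihood)
w_lse = min(candidates, key=squared_error)
assert w_ml == w_lse
print(w_ml)  # → 1.99
```

The equivalence holds because the log-likelihood is a constant minus the squared error divided by 2σ², so the same w optimizes both.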

govardhanreddy

You are explaining ML very effectively. I always try to understand, but I am not from the data mining side, so I find it difficult.

saipavanig

Wonderful explanation, really worth watching your video till the end without skipping... I am in my final year!!

sarveshnaik

Mam, please complete the video series soon, and also post important ML questions. Please, mam!

kammelaaradhan

Thank you mam... for helping poor students ❤

pannagabm

Excellent explanation mam... can't thank you enough

sindhujareddy

All I can say is thanks a lot for saving my sem exam, mam!! Thank you so much!

udaykumarn

At 5:12, won't it be h(di) rather than h(xi)? We haven't defined xi, and when we substitute di in place of x, shouldn't μ be changed to h(di)? Correct me if wrong!...
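
For viewers with the same doubt, here is a sketch of the standard derivation from Mitchell's *Machine Learning* (which this lecture series appears to follow, so the notation is an assumption): each training example is a pair ⟨xi, di⟩ where di = f(xi) + ei and the noise ei is zero-mean Gaussian. The Gaussian's mean is therefore the hypothesis's prediction at the *input* xi, so μ = h(xi), not h(di):

```latex
% Each example is <x_i, d_i> with d_i = f(x_i) + e_i, e_i ~ N(0, \sigma^2).
% Hence d_i ~ N(f(x_i), \sigma^2), and the candidate hypothesis h supplies
% the mean, giving \mu = h(x_i):
h_{ML} = \arg\max_{h \in H} \prod_{i=1}^{m}
           \frac{1}{\sqrt{2\pi\sigma^2}}
           e^{-\frac{1}{2\sigma^2}\left(d_i - h(x_i)\right)^2}
       = \arg\min_{h \in H} \sum_{i=1}^{m} \left(d_i - h(x_i)\right)^2
```

In short: di is the observed random quantity and h(xi) is the mean it is distributed around, so writing μ = h(di) would apply the hypothesis to an output rather than an input.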

rithanyabalamurali

Very good teaching, my dear sister. All the best for your bright future 😊

vsyadav

Marri Laxman Reddy Government College, Dundigal

vamshireddy

Appreciate your efforts, mam. Thank you so much for the nice explanation and presentation.

tammanakarthikeya

Thank you very much for your content!

harendrakumar

Very clearly explained. Thank you so much, you have helped me a lot 😊

vindyasemith

You defined P(D|h) as the product of the P(di|h) terms,
but when we look at hMAP it is P(h|D), right?
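
In case other viewers share this question: P(D|h) and P(h|D) are different quantities connected by Bayes' theorem. A sketch in Mitchell's notation (an assumption about the notation the video uses): since P(D) does not depend on h, maximizing P(h|D) is the same as maximizing P(D|h)P(h), and with a uniform prior over hypotheses the MAP hypothesis reduces to the ML hypothesis, which is where the product of P(di|h) terms comes in:

```latex
h_{MAP} = \arg\max_{h \in H} P(h \mid D)
        = \arg\max_{h \in H} \frac{P(D \mid h)\,P(h)}{P(D)}
        = \arg\max_{h \in H} P(D \mid h)\,P(h)
% With a uniform prior P(h), this reduces to the ML hypothesis, and
% with i.i.d. examples the likelihood factors into a product:
h_{ML}  = \arg\max_{h \in H} P(D \mid h)
        = \arg\max_{h \in H} \prod_{i=1}^{m} P(d_i \mid h)
```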

johnniejyothish

Thank you ma'am. Love your teaching 😍

pardhi

Mam, can we know which textbook you are following? Only one textbook is mentioned in our JNTUH syllabus, and its content is a little different from yours.

divitisaitezaa

Mam, can you explain IDS (JNTUH)? My exams are on March 4th, please help.

futureworld-hh

Please make a video on Maximum likelihood hypothesis for predicting probabilities. Exams are approaching. Please make it fast.

chaithranagaraju