Machine Learning 3.2 - Linear Discriminant Analysis (LDA) and Quadratic Discriminant Analysis (QDA)

We will cover classification models in which we estimate the probability distributions for the classes. We can then compute the likelihood of each class for a new observation, and then assign the new observation to the class with the greatest likelihood. These maximum likelihood methods, such as the LDA and QDA methods you will see in this section, are often the best methods to use on data whose classes are well-approximated by standard probability distributions.
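The approach the description outlines — fit a probability distribution per class, then assign a new observation to the class with the greatest likelihood — can be sketched with scikit-learn's LDA and QDA estimators. This is a minimal illustration on synthetic Gaussian data; the means, covariances, and test point are made up for the example.

```python
# Sketch of likelihood-based classification with LDA and QDA,
# using scikit-learn on synthetic Gaussian data (illustrative values only).
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis,
    QuadraticDiscriminantAnalysis,
)

rng = np.random.default_rng(0)
# Two classes drawn from multivariate normals with different means.
X0 = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 1.0]], size=200)
X1 = rng.multivariate_normal([2, 2], [[1.0, -0.2], [-0.2, 1.0]], size=200)
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

# LDA assumes a single covariance matrix shared by all classes;
# QDA estimates a separate covariance matrix for each class.
lda = LinearDiscriminantAnalysis().fit(X, y)
qda = QuadraticDiscriminantAnalysis().fit(X, y)

# Each model assigns a new observation to the class whose estimated
# distribution makes it most likely (weighted by the class prior).
x_new = np.array([[1.0, 1.0]])
print(lda.predict(x_new), lda.predict_proba(x_new))
print(qda.predict(x_new), qda.predict_proba(x_new))
```

When the true class covariances differ, QDA's per-class covariance estimates give it the edge; when they are equal (or data is scarce), LDA's shared-covariance assumption is the safer choice.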

Comments

Thank you for your explanation. I also think at 8:15 the multivariate normal distribution's probability density function should have $\sqrt{|\Sigma|}$ in the denominator (rather than $|\Sigma|$ as you currently have), and it may also be helpful to let viewers know that $p$ represents the dimension of the space we are considering.

SamHere
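For reference, the correction in the comment above matches the standard form of the multivariate normal density, where $p$ is the dimension of $x$, $\mu$ is the mean vector, and $\Sigma$ is the covariance matrix:

```latex
f(x) = \frac{1}{(2\pi)^{p/2}\,|\Sigma|^{1/2}}
       \exp\!\left(-\tfrac{1}{2}(x-\mu)^{\top}\Sigma^{-1}(x-\mu)\right)
```

Note that $|\Sigma|^{1/2} = \sqrt{|\Sigma|}$, as the commenter suggests.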

This beats my MIT lecture. Will be coming back for more!

gingerderidder

A very good and concise explanation, even starting with the explanation of likelihood. Very well done!

Spiegeldondi

I enjoyed watching your video, thank you. I will watch more of your machine learning videos, thank you!

neftalisalazar

Thanks for this! I needed to clarify these methods in particular; I was reading about them in ISLR.

lizzy

Interesting and clear explanation! Thank you very much, this will help me in writing my thesis!

JappieYow

I was trying to read it myself, but you made it so much simpler.

ofal

10:48 I was just going back and forth between the sections on LDA and QDA in three different textbooks (An Introduction to Statistical Learning, Applied Predictive Analytics, and The Elements of Statistical Learning) for well over an hour, and that multivariate normal pdf was really throwing me off. Mostly because of the capital sigma raised to the negative first power: I didn't realize it was literally a capital sigma (the covariance matrix); I kept thinking it was a summation of something!

spencerantoniomarlen-starr
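The confusion in the comment above is common, so here is a small numerical sanity check (with NumPy and SciPy; the mean, covariance, and test point are made-up values): $\Sigma$ in the density is the covariance matrix, inverted inside the exponent, with $\sqrt{|\Sigma|}$ in the denominator.

```python
# Numerical check that Sigma in the multivariate normal density is the
# covariance matrix (inverted inside the exponent), not a summation,
# and that sqrt(|Sigma|) appears in the denominator. Illustrative values.
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 0.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])  # covariance matrix (capital sigma)
x = np.array([1.0, -0.5])
p = len(mu)  # dimension of the space

# (x - mu)^T Sigma^{-1} (x - mu): the quadratic form in the exponent.
quad = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)
density = np.exp(-0.5 * quad) / (
    (2 * np.pi) ** (p / 2) * np.sqrt(np.linalg.det(Sigma))
)

# Should agree with scipy's implementation of the same density.
print(density, multivariate_normal(mean=mu, cov=Sigma).pdf(x))
```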

Good job. It is very easy to follow and understand

Dhdhhhjjjssuxhe

Awesome lecture, thank you professor!

huilinchang

Thank you so much! This cleared a lot of my doubts.

vihnupradeep

Can you share the slides from the videos with me?

jaafarelouakhchachi

Great video! Thank you, professor!! :)

geo

Very useful information, thank you, professor!

mwvites

Hi! If the classes are assumed to be normally distributed, does that imply that the features making up an observation are normally distributed as well?

kaym

How do you get the values of 0.15 and 0.02? I'm getting different values.

saunokchakrabarty