Machine Learning Lecture 15 '(Linear) Support Vector Machines continued' -Cornell CS4780 SP17

Lecture Notes:
Comments

Not sure why this playlist doesn't come up on top when one searches for ML on YouTube. Andrew Ng might be a good researcher, but he's not a very good teacher. Kilian teaches in such a good manner that I never once felt bored or felt as if I was studying. Thanks Kilian, you're a gem.

saransh

I am glad that I chose your course to learn the theory of ML. May God bless you and your family! Thank you!

Anuarlogon

Thanks a lot for making this high-quality content publicly available - much appreciated!

matthieuglotz

Your natural intelligence has been trained to check the mic before every class after failing in one :)

kirtanpatel

Cool, man, you show with graphs how excellent the SVM is :D Very clear and vivid graphs.

gregmakov

Thanks for the awesome lectures! We have a weekly "book club" of Cornell MAE alumni teaching ourselves ML based on your course. We've all been wondering: where do you get your sweaters?

lawrencelenkin

I want to know more about the SVM hinge loss function and the SVM loss function... does anyone know a good resource?

ehfo
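
For the hinge-loss question above, a minimal sketch (my own, not from the lecture notes) of the unconstrained soft-margin objective wTw + C * sum_i max(0, 1 - y_i(wT x_i + b)), evaluated on made-up toy data:

import numpy as np

def svm_objective(w, b, X, y, C=1.0):
    # hinge loss per point: max(0, 1 - y * (w.x + b)); zero once a point clears the margin
    hinge = np.maximum(0.0, 1.0 - y * (X @ w + b))
    # regularizer wTw plus C times the total hinge loss
    return w @ w + C * hinge.sum()

# made-up toy data: two points per class
X = np.array([[1.0, 2.0], [2.0, 3.0], [-1.0, -1.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
print(svm_objective(np.array([0.5, 0.5]), 0.0, X, y, C=1.0))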

Hi Prof,
I have 2 doubts:
1. Does the SVM output only one hyperplane? I guess some planes passing through the midpoint of the line joining the support vectors also satisfy all the equations.
2. In your lecture notes, when changing the value of "C", why does only the magnitude of "w" change, while no change in the direction of "w" is observed?

ayushmalik
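
A toy worked example for question 1 above (my own, not from the lecture): take just two support vectors, x = (0, 1) with y = +1 and x = (0, -1) with y = -1. Any hyperplane through their midpoint (the origin) with w = (w1, w2) and b = 0 satisfies the constraints y_i(wT x_i + b) >= 1 as long as w2 >= 1, but the objective wTw = w1^2 + w2^2 is minimized only at w = (0, 1). Tilting the plane (w1 != 0) keeps the constraints satisfiable yet strictly increases wTw, i.e. strictly shrinks the margin 1/||w||, so among all planes through the midpoint only one maximizes the margin.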

Why are we considering wTw as the regularizer in the hinge formulation when originally wTw was the quantity being minimized (without considering any slack)?
Shouldn't the introduction of slack be the regularizer here?

uddishnegi
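
One way to read it (my reading, not a quote from the notes): the slack formulation minimizes wTw + C * sum_i xi_i subject to y_i(wT x_i + b) >= 1 - xi_i and xi_i >= 0. At the optimum each slack takes its smallest feasible value, xi_i = max(0, 1 - y_i(wT x_i + b)), which is exactly the hinge loss, so the slack term becomes the data-fit (loss) part of the objective. The wTw term is what keeps ||w|| small, i.e. the margin 1/||w|| large, independently of the data, which is why it, and not the slack, is called the regularizer.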

Is the margin defined differently in the case of an SVM with soft constraints? I.e., are the points which don't satisfy y_i (transpose(w) * x_i + b) >= 1 considered while calculating the margin (which is min d, where d is the distance of each point to the given hyperplane)?

sudhanshuvashisht
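
As I understand the standard soft-margin setup (not a quote from the notes): the margin still refers to the slab between wT x + b = +1 and wT x + b = -1, whose half-width is 1/||w||. The distance from a point x to the hyperplane is |wT x + b| / ||w||, so points with y_i(wT x_i + b) = 1 sit exactly 1/||w|| away. Points that violate the constraint pay slack and may lie inside the slab or on the wrong side, but they do not redefine the margin as the minimum distance over all points.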

Why does w get smaller as C decreases?

ehfo
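
A quick empirical sketch for the question above (my own toy experiment with scikit-learn, not from the lecture): as C decreases, the hinge/slack penalty matters less relative to wTw, so the optimizer accepts more slack in exchange for a smaller ||w||, i.e. a wider margin 1/||w||.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# made-up data: two roughly separable blobs
X, y = make_blobs(n_samples=200, centers=2, random_state=0)
for C in (100.0, 1.0, 0.01):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w = clf.coef_.ravel()
    # the norm should shrink as C decreases; the direction w/||w|| stays roughly the same
    print(f"C={C:>6}  ||w|| = {np.linalg.norm(w):.3f}  direction = {w / np.linalg.norm(w)}")

The printed norms should drop as C goes from 100 to 0.01 while the normalized direction barely moves, which also relates to the direction question asked earlier in the thread.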

Log loss is the (negative) log-likelihood of P(Y|X) (I think). Do the other losses correspond to some modeling of P(Y|X) as well?

deltasun
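
As far as I know (not from the lecture notes): yes for the log loss, log(1 + exp(-y * wT x)) is exactly -log P(y | x) under the logistic model P(y | x) = 1 / (1 + exp(-y * wT x)), and the squared loss can similarly be read as a negative log-likelihood under Gaussian noise. The hinge loss, however, does not arise as -log of any normalized conditional distribution P(Y|X), which is one reason a plain SVM does not give calibrated probabilities.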