Machine Learning Lecture 14 '(Linear) Support Vector Machines' - Cornell CS4780 SP17

Lecture Notes:
Comments

Exactly what I was looking for. I searched a lot; as usual, I started with CS229 (Stanford) and moved to AI (MIT), but this is by far the most precise explanation of SVMs (particularly the maths behind it). Thanks.

pvriitd

One of the best machine learning courses I have ever attended. "Maximization is for losers" is a gem! 😂 Thank you very much, Professor Kilian, you're great.

andreamercuri

I always reflexively raise my hand whenever he says "raise your hand if you are with me". Thank you so much, Professor. These videos are a treasure trove. I wish I had been there in the class.

vaibhavsingh

The quality of these lectures is so good that I hit the "Like" button first and then watch the video. Thank you Prof.

nikhilsaini

Using this to supplement Cornell's current CS 4780 professor's lectures and I'm finding them to be more helpful tbh. Excellent quality and passion.

msut

The best course on SVM. Thank you Kilian.

hamzaleb

Amazing lectures. So well explained and great humour. Thank you!

florianellsaesser

The best SVM explanation and derivation that I have ever found on YouTube.

Went

The best derivation of SVMs I have ever seen from anyone.

darshansolanki

Can't thank you enough for these lectures Professor Weinberger!

hamzak

It would be great if you could make the course assignments available to the YouTube audience as well. Thanks a lot for the video; as always, it's fabulous :)

saikumartadi

Ahh... felt like watching a movie. Best intro to SVM on YT.

SAINIVEDH

Dear Kilian, please share the other courses that you teach as well. This is a wonderful resource.

inseconds

Your lessons cleared up my understanding of SVMs.
Thank you so much.

khamphacuocsong

6:00 It may be the case that the notation has changed, but there is no such thing (at least as of today) as (y - Xw)^2, since we cannot square a vector; it should be ||y - Xw||^2, the squared l2-norm.
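For reference, assuming the standard least-squares setup the comment above refers to, the squared-error objective is indeed the squared l2-norm of the residual vector, which expands into a sum of scalar squares:

$$\|y - Xw\|_2^2 \;=\; (y - Xw)^\top (y - Xw) \;=\; \sum_{i=1}^{n} \big(y_i - x_i^\top w\big)^2,$$

where $x_i^\top$ denotes the $i$-th row of $X$.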

23:00 The method of finding the distance between the point and the hyperplane was very clever, but the numerator should be the absolute value of w.T x + b, since the distance must always be non-negative. I think there is a simpler method, but it requires more linear algebra, which may be why the Professor took this approach instead.
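For reference, writing the hyperplane as $H = \{z : w^\top z + b = 0\}$, the distance formula in question is the standard one below; the simpler linear-algebra route mentioned above is presumably the projection of $x - x_0$ onto the unit normal $w/\|w\|_2$, where $x_0$ is any point on $H$ (so that $w^\top x_0 = -b$):

$$d(x, H) \;=\; \left|\frac{w^\top (x - x_0)}{\|w\|_2}\right| \;=\; \frac{|w^\top x + b|}{\|w\|_2}.$$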

37:10 I have marked it as "brilliant move!!" in my notes!

prwi

Very good lecture, many thanks for the explanations as well as for the humor :D
Could you please share the demo from this class?

imedkhabbouchi

Dear Professor, at 35:40 you talk of the 'trick' of rescaling w and b such that min |wx + b| = 1 (over all data points). Would it not be more accurate to say that we do not rescale w and b for this trick, but rather that we choose b such that the trick works? The outer maximization changes w to minimize the norm, and thus the direction of the decision boundary, while our value for b is chosen such that min |wx + b| = 1. By 'direction' I mean that w defines the decision boundary (it is perpendicular to it), while b can only move the decision boundary in a parallel direction.

I hope I have explained myself clearly. Thank you for your lectures!
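For reference, the rescaling discussed above: for any $c > 0$, the pair $(w/c,\, b/c)$ defines exactly the same hyperplane and the same classifier as $(w, b)$. Assuming no training point lies exactly on the boundary, one may therefore pick $c$ equal to the smallest unsigned score and obtain the normalization used in the lecture without changing the decision boundary:

$$c := \min_i \big|w^\top x_i + b\big| > 0 \;\Longrightarrow\; \left\{x : \tfrac{w^\top x + b}{c} = 0\right\} = \left\{x : w^\top x + b = 0\right\} \quad\text{and}\quad \min_i \left|\tfrac{w^\top x_i + b}{c}\right| = 1.$$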

abs

I have to ask, though: do support vector machines still find much application today, given that they are outclassed on structured data by ensemble methods, while on unstructured data deep learning outperforms them?

kunindsahu

In the margin equation, how does the length of w in the denominator become w^T w without the square root?
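One way to see the step this question asks about: $\|w\|_2 = \sqrt{w^\top w}$ is non-negative and squaring is monotone on the non-negative reals, so maximizing the margin $1/\|w\|_2$ over the feasible $(w, b)$ is equivalent to minimizing $\|w\|_2$, which in turn has the same minimizer as $\|w\|_2^2 = w^\top w$; the square root is dropped only because it does not change the argmin:

$$\arg\max_{w,b}\; \frac{1}{\|w\|_2} \;=\; \arg\min_{w,b}\; \|w\|_2 \;=\; \arg\min_{w,b}\; \|w\|_2^2 \;=\; \arg\min_{w,b}\; w^\top w \qquad \text{(subject to the same margin constraints).}$$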

MrSirawichj

Awesome lecture. Sir, the link to Ben Taskar's notes mentioned on your webpage is not working.

saquibmansoor