Support Vector Machines (SVMs): A friendly introduction

For a code implementation, check out this repo:

40% discount code: serranoyt

An introduction to support vector machines (SVMs) that requires very little math (no calculus or linear algebra), only a visual mind.
This is the third of a series of three videos.

0:00 Introduction
1:42 Classification goal: split data
3:14 Perceptron algorithm
6:00 Split data - separate lines
7:05 How to separate lines?
12:01 Expanding rate
18:19 Perceptron Error
19:26 SVM Classification Error
20:34 Margin Error
25:13 Challenge - Gradient Descent
27:25 Which line is better?
28:24 The C parameter
30:16 Series of 3 videos
30:30 Thank you!
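
For reference, the error pieces named in the chapters above (SVM classification error at 19:26, margin error at 20:34, the C parameter at 28:24) fit together as in the standard soft-margin objective. A sketch, assuming the video's C weights the classification error the way it does in that formulation:

\[
  \min_{a,\,b,\,c}\;\; \underbrace{a^2 + b^2}_{\text{margin error}} \;+\; C \sum_i \underbrace{\max\bigl(0,\; 1 - y_i\,(a p_i + b q_i + c)\bigr)}_{\text{classification error}}
\]

where each training point $(p_i, q_i)$ carries a label $y_i = \pm 1$: a large C pushes the line to classify every point correctly, while a small C favors a wide margin.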
Comments

Best explanation of SVM on YouTube. Keep up the good work.

hichamsabah

Hey Luis, I have recently come across your videos and I am blown away by your simple approach to delivering the mathematics and the logic, especially the mention of applications. A quick one: DO YOU TAKE STUDENTS? I WOULD LIKE TO ENROLL. I am mostly interested in the analysis of biological data and rarely find videos as good as this one. I'm simply in love with your methods.

ocarerepairlab

As always, very nicely and clearly explained. Looking forward to seeing your explanations of PCA, t-SNE, and some topics in reinforcement learning.

mohammedhasan

Thank you very much for this amazing video. I have come across your channel only recently and I do like your way of explaining these complicated topics.

I have got two (hopefully not too dumb) questions regarding SVMs:

Given the similarity of SVMs and logistic regression, would it be a good idea to start from an LR result instead of a random line?

Did I understand correctly that the distance between the two lines can only increase during the search for the best solution? Wouldn't it be conceivable that at some point the combined error function decreases by decreasing the distance between the lines?

gammaturn

May I ask whether the step of separating the lines is only for optimizing the model? As in, once the two lines already separate the training data, you expand them to see how wide the margin can get?

dante_calisthenics

Hi Luis, I like your YouTube video animations, they are great! May I ask what software you use for the animations?

scientific-reasoning

SVMs are a non-parametric algorithm, and you are explaining them as a parametric one... are you sure that this is the way to go? SVMs usually work with a matrix of alphas to determine the support vectors and get the best separation. In your algorithm, however, you always multiply a, b, c by 0.99, which makes no sense to me: why should the range get wider every time? (Multiplying a, b, c by 0.99 on each iteration makes them smaller and smaller, so the boundaries become wider and wider.)

bertobertoberto
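
The parenthetical above is exactly the mechanism at work: scaling a, b, c leaves the boundary line ap + bq + c = 0 unchanged but moves the margin lines ap + bq + c = ±1 apart. A minimal numeric check in Python, using the margin width 2/sqrt(a² + b²) (derived a little further down); the sample coefficients are illustrative:

    import math

    a, b, c = 2.0, 3.0, -6.0                    # boundary line: 2x + 3y - 6 = 0
    for step in range(3):
        width = 2 / math.sqrt(a**2 + b**2)      # distance between the +/-1 lines
        print(f"step {step}: a={a:.3f}, b={b:.3f}, c={c:.3f}, width={width:.4f}")
        a, b, c = 0.99 * a, 0.99 * b, 0.99 * c  # the video's expanding step
    # The printed width grows each step even though the zero line never moves.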

22:00, can anyone derive that expression?

KoreaRwkz
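
Assuming the expression at 22:00 is the width of the margin, that is, the distance between the lines $ax + by + c = 1$ and $ax + by + c = -1$, a sketch of the derivation: the distance from a point $(x_0, y_0)$ to a line $ax + by + c = 0$ is $|ax_0 + by_0 + c| / \sqrt{a^2 + b^2}$. Take any point $(x_0, y_0)$ on the upper margin line, so $ax_0 + by_0 + c = 1$, and measure its distance to the lower line, rewritten as $ax + by + (c + 1) = 0$:

\[
  \frac{|a x_0 + b y_0 + c + 1|}{\sqrt{a^2 + b^2}} \;=\; \frac{|1 + 1|}{\sqrt{a^2 + b^2}} \;=\; \frac{2}{\sqrt{a^2 + b^2}}.
\]

Shrinking $a$ and $b$ (the multiplication by 0.99) therefore widens the margin.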

The best machine learning / deep learning channel I've learnt from.

naps

This is a great explanation of the concepts; it helped me.
But isn't this video about the Support Vector Classifier and not the SVM (which uses kernelization)?
The SVC uses the maximal margin classifier, with a budget parameter for errors, and the SVM uses the SVC in an expanded feature space made by kernelization.

drewlehe

I think the SVM loop should use one line, ap + bq + c - 1 > 0, for the blue points and another line, ap + bq + c + 1 < 0, for the red points. Otherwise the parallel lines are not used in the SVM algorithm.

macknightxu
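
For what it's worth, the loop the video describes can indeed be written with exactly those two parallel lines. A minimal Python sketch, assuming a learning rate of 0.01 and the video's expanding rate of 0.99 (the function name and default values are illustrative, not taken from the video's repo):

    import random

    def train(points, labels, lr=0.01, expanding_rate=0.99, epochs=1000):
        # Boundary: a*p + b*q + c = 0; margin lines: a*p + b*q + c = +1 and -1.
        a, b, c = [random.uniform(-1, 1) for _ in range(3)]
        for _ in range(epochs):
            i = random.randrange(len(points))
            p, q = points[i]
            if labels[i] == 1 and a * p + b * q + c < 1:
                # Blue point not yet past the +1 line: pull the line toward it.
                a, b, c = a + lr * p, b + lr * q, c + lr
            elif labels[i] == -1 and a * p + b * q + c > -1:
                # Red point not yet past the -1 line: push the line away from it.
                a, b, c = a - lr * p, b - lr * q, c - lr
            # Expanding step: shrinking a, b, c widens the margin lines.
            a, b, c = a * expanding_rate, b * expanding_rate, c * expanding_rate
        return a, b, c

Blue points (label +1) then end up satisfying ap + bq + c ≥ 1 and red points ap + bq + c ≤ -1, which is the commenter's pair of conditions.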

Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!

blesucation

You made SVMs look like a walk in the park. I thoroughly enjoyed this, just as I enjoyed your Math for ML specialisation on Coursera.

tangledweb

Visual, thorough, informal — perfect!

JohnTheStun

In the SVM pseudo-algorithm, the last step multiplies a, b, c by 0.99; shouldn't the right-hand side also be multiplied by 0.99, making it 0.99 rather than 1? Am I missing something?

anujshah

Great tutorial. At 16:23, in "if point is blue, and ap + bq + c > 0", I think the equation should be BLUE (to match the BLUE dash on the graph) rather than RED. Similarly, in "if point is red, and ap + bq + c < 0", the equation should be RED (to match the RED dash on the graph) instead of BLUE. Pardon me if I am wrong.

xruan

Thank you for the good explanation. However, I miss some context: what is the added value compared to logistic regression? And some recommendations on when to prefer this algorithm over others would help...

Pulorn

This is terrifying, omg. You approach it perfectly, and all the math behind it just guides me to the point where I have to say WOW! Such a good observation; this video is pure gold. I love your approach at 22:56 so much: you guide me to that point, say "that's the regularization term", and suddenly the thing I had been trying to understand all this time is explained in a few minutes. Really appreciate it <3. I subscribed.

nguyenbaodung

At 16:36 we multiply a, b, c by 0.99, but in the loop 0.99ap + 0.99bq + 0.99c = 0 is the same line as ap + bq + c = 0, so isn't the multiplication by 0.99 pointless?

macknightxu

Thank you, this is fantastic! Your visual explanations are great; they've really helped me understand the intuition behind these techniques.

JimmyGarzon