Maths Intuition Behind Support Vector Machine Part 2 | Machine Learning Data Science

In machine learning, support-vector machines are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis.
Please join my channel as a member to get additional benefits like Data Science materials, live streams for members, and more.

Please do subscribe to my other channel too.

If you want to give a donation to support my channel, the GPay ID is below.

Connect with me here:

Comments

I'm glad someone like you decided to make a video on this. I have found that many find SVM hard to grasp because they dive directly into the code without understanding the intuition behind it. This goes a long way in helping people out.

varuag

My Dear Teacher
From my heart, I salute you, because you work so hard for us, your students, to teach things with so much clarity. Praying for you.
....Noushad Rahim, Kerala

myeschool

I cannot begin to thank you enough for breaking down and simplifying the math behind the machine learning algorithms. Understanding the math under the hood is essential for tuning the hyperparameters. I love your videos, and I'm always recommending that aspiring data scientists check out your channel.

victor

You really are so passionate about teaching, sir. Sometimes you even get breathless from the excitement of teaching... I've got no words. I hope you will upload videos on topics related to DNNs as well. Proud to be learning from you.

abhijitbhandari

Man, how do you remember all this... I keep forgetting the concepts after a few weeks and have to watch it again to get a grasp on them.
A million thanks to you for sharing your precious knowledge with us.

YouTubelesss

The most sensible tutor I have seen in my life, one who always has his finger on the students' pulse. There are lots of tutorials about SVM on YouTube, but no one else covered it A to Z like Krish. I appreciate you, Krish.

mahikhan

The way you simplify things is really commendable. After reading a lot of blogs and going through other resources, I finally landed here, and it was worth it. Thank you, sir.

akshaykhavare

Very informative video and simple to understand. One slight oversight: since X here is 2-d, [x1, x2], W (without b) must be 2-d as well, [w1, w2]. If we fold in the bias b, then X = [x1, x2, 1] and W = [w1, w2, b], and in that augmented space the decision boundary is a plane instead of a line.

datahat
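A minimal numpy sketch of the point above, checking that the explicit-bias form and the augmented form give the same decision value; the weight vector, bias, and sample point here are illustrative, not taken from the video:

```python
import numpy as np

# 2-d input and matching 2-d weight vector, plus a separate bias
x = np.array([2.0, -1.0])        # x = [x1, x2]
w = np.array([1.0, 1.0])         # w = [w1, w2]
b = 0.5

# Decision value with an explicit bias term: w^T x + b
plain = w @ x + b

# Augmented form: fold the bias into the vectors
x_aug = np.append(x, 1.0)        # [x1, x2, 1]
w_aug = np.append(w, b)          # [w1, w2, b]
augmented = w_aug @ x_aug        # same value as w^T x + b

assert np.isclose(plain, augmented)
print(plain, augmented)          # 1.5 1.5
```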

Thanks, sir, for all of your videos. If it weren't for you, I would never have learned this much; the people at our institute only gave an overview and left it at that, but the real knowledge comes from your videos. Thank you so much.

prashantkumarvishwakarma

Thank you so much, sir. The way you were teaching, I was getting all of your points, and my love for your method and dedication keeps growing. A lot of love, respect, and salutes from Pakistan... knowledge has no boundaries...

usamaahmad

Just great! Wow! It was a great experience. Eagerly waiting for part 3 of SVM, covering the kernel trick.

jbhsmeta

The regularization parameter C is basically how much we want to avoid misclassifying points. If C is very large (toward infinity), the optimizer seeks a perfect classification of the training samples even if that means a smaller margin; if C is very small (toward 0), the optimizer finds the maximum-margin classifier even though it misclassifies some points. Hence we have to find a good value of C in between.
The gamma parameter defines how much influence a single training example has. For example, if gamma is high, only the points nearest the margin are considered when calculating distances, but if gamma is low, points farther from the margin are considered as well.

rvg
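A minimal scikit-learn sketch of those two knobs; the toy dataset and the particular C/gamma values are illustrative, not from the video:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy 2-d dataset with some overlap between the classes
X, y = make_moons(n_samples=300, noise=0.3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Large C chases every training point (smaller margin, risk of overfitting);
# small C tolerates some misclassification for a wider margin.
# Large gamma lets only nearby points influence the boundary; small gamma
# lets far-away points matter too.
for C in (0.01, 1.0, 100.0):
    for gamma in (0.1, 1.0, 10.0):
        clf = SVC(kernel="rbf", C=C, gamma=gamma).fit(X_train, y_train)
        print(f"C={C:>6}, gamma={gamma:>4}: "
              f"train={clf.score(X_train, y_train):.2f}, "
              f"test={clf.score(X_test, y_test):.2f}")
```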

I can hardly express my infinite appreciation for you, sir! This video is so, so intuitive and uses less advanced math!

wenqichen

Where have you been all this time, sir? I was searching for a teacher like you in ML. Finally, mission accomplished. Love from my side.

SumanBhartismn

Thank you so much, bro. I like you so much, and your individuality shows through in each and every video. I will become a data scientist one day...

rajraji

The w matrix should be [1 1] because the line equation is x1 + x2 = 0. Also, while computing the value of y, w^T should have dimension 1x2 and X should be 2x1, so that you get a single scalar value.

kirushikeshdb
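A quick numpy check of that shape argument; the sample point is illustrative:

```python
import numpy as np

# Line x1 + x2 = 0  =>  w = [1, 1] (with b = 0)
w = np.array([[1.0, 1.0]])       # w^T as a 1x2 row vector
x = np.array([[3.0], [-3.0]])    # X as a 2x1 column vector

y = w @ x                        # (1x2) @ (2x1) -> 1x1, a single value
print(y)                         # [[0.]] -- this point lies on the line
```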

Kernels come in several forms: the sigmoid kernel has an S-shaped graph, and there are linear and polynomial forms as well.
I'm from a statistics degree; you have great knowledge, dude, keep it up.

darshitsolanki
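For reference, a minimal scikit-learn sketch of swapping among these kernel forms; the toy dataset is illustrative, not from the video:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Concentric circles: not linearly separable in the original space
X, y = make_circles(n_samples=200, factor=0.3, noise=0.1, random_state=0)

# The same estimator accepts linear, polynomial, RBF, and sigmoid kernels
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel).fit(X, y)
    print(f"{kernel:>8}: train accuracy = {clf.score(X, y):.2f}")
```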

Hey Krish,

I just want to say that your explanations are superb. I am new to Machine Learning, and I took an online course about it, but it barely gets into the mathematics. I understand that to get good and serious at ML we need a solid mathematical understanding of the various models, so I appreciate these videos that go in depth.

To be honest, I watched it the first time and didn't completely get it, but I'm going to watch it again now!

andrewwilliam

A very, very impressive explanation. Thanks a lot. May God always keep you happy and healthy...

rupeshsingh

I saw so many articles about SVMs; every one just states the distance formula and says to maximize it, but your simplification from scratch is awesome, sir!

sivareddynagireddy