ML Teach by Doing Lecture 7: Linear Classifiers Part 2

Welcome to Lecture 7 of the Machine Learning: Teach by Doing project.

In this lecture, we run our first ML algorithm: the Random Linear Classifier. We learn about hyperparameters, cross-validation, and many more exciting concepts!

0:00 Introduction
5:20 Random Linear Classifier algorithm
11:40 Parameters vs Hyperparameters
14:47 Result validation
22:47 Cross validation
29:00 Loss function
31:45 6 ML steps recap
34:15 Conclusion
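The Random Linear Classifier discussed in the lecture can be sketched in a few lines. This is a minimal illustration, not the lecture's exact code; the toy data, function name, and seed are my own:

```python
import numpy as np

def random_linear_classifier(X, y, k, seed=0):
    """Draw k random hyperplanes and keep the one with the lowest
    training error. k is the hyperparameter; (theta, theta0) are the
    parameters the algorithm returns."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(k):
        theta = rng.standard_normal(X.shape[1])  # random direction
        theta0 = rng.standard_normal()           # random offset
        err = np.mean(np.sign(X @ theta + theta0) != y)  # 0-1 training error
        if err < best_err:
            best_err, best = err, (theta, theta0)
    return best

# toy data: two well-separated 2-D clusters (the "cats vs dogs" picture)
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
theta, theta0 = random_linear_classifier(X, y, k=100)
train_err = np.mean(np.sign(X @ theta + theta0) != y)
```

With k=100 random draws on such well-separated clusters, the best hyperplane typically misclassifies few or no training points.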

(a) Notes and PPT (PDF) shown in the video:

=================================================

Machine Learning: Teach by Doing is a project started by the co-founders of Vizuara: Dr. Raj Dandekar (IIT Madras BTech, MIT PhD), Dr. Rajat Dandekar (IIT Madras MTech, Purdue PhD), and Dr. Sreedath Panat (IIT Madras MTech, MIT PhD).

In 2018, Dr. Raj Dandekar attended his first ML lecture at MIT, and it transformed his life. Over the next four years, he mastered ML, published ML research, did ML internships and corporate jobs, and finally obtained his ML PhD from MIT.

Machine Learning: Teach by Doing is not a normal video course. In this project, we will begin learning ML from scratch, along with you. Every day, we will post what we learned the previous day. We will make lecture notes and also share reference material.

As we learn the material again, we will share thoughts on what is actually useful in industry and what has become irrelevant. We will also share a lot of information on which subjects contain open areas of research. Interested students can start their research journey there.
If you are confused or stuck in your ML journey, maybe courses and offline videos are not inspiring enough. What might inspire you is seeing someone else learn machine learning from scratch.
No cost. No hidden charges. Pure old school teaching and learning.

=================================================

🌟 Meet Our Team: 🌟

🎓 Dr. Raj Dandekar (MIT PhD, IIT Madras department topper)

🎓 Dr. Rajat Dandekar (Purdue PhD, IIT Madras department gold medalist)

🎓 Dr. Sreedath Panat (MIT PhD, IIT Madras department gold medalist)
Comments

One of the best videos on ML foundations. No other instructor touches the fundamentals or the building blocks of thinking in terms of machine learning, or how to approach a problem statement.

praulayar

Thank you for starting this virtual co-learning series. This is a very good approach to learn. Please keep it going.

MrGirishbarhate

Hello, I am from Sri Lanka; currently I am on lecture 7. BTW, I started learning machine learning a year ago, and I've never found a tutorial like this on YouTube, Coursera, or Udemy. The best way to start learning machine learning....
Kudos for your efforts 👌👌👌👌😊😊

myaltaccs

I'm so lucky I discovered this so early. Truly underrated, as they say!

sanjeevhotha

This content is awesome, Raj; I have never made it this far in ML. Thank you very much.

PavanKumar-zkob

I didn't understand this lecture when I watched it the first time, but after watching it again it was just a mind-blowing experience. Thank you.

C__ADITYA_MOTE

My takeaway from this video:

1. By finalizing the learning algorithm with the help of the CVE, we choose the best possible value of K.
2. Now, using that K, we will find the best hypothesis. (If the best possible K came out to be 100 in the first step, then we will choose 100 random values of (x_1, x_2, x_3), test them on the test data, and choose the particular hypothesis that has the least test error.)

Please correct me if I am wrong, sir.
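The two-step recipe in this takeaway (pick K by cross-validation, then retrain with that K) can be sketched as follows. This is an illustrative sketch, not the lecture's code; the helper names, fold count, and toy data are my own:

```python
import numpy as np

def rlc(X, y, k, rng):
    """Best of k random hyperplanes on (X, y), by 0-1 training error."""
    best, best_err = None, np.inf
    for _ in range(k):
        theta, theta0 = rng.standard_normal(X.shape[1]), rng.standard_normal()
        err = np.mean(np.sign(X @ theta + theta0) != y)
        if err < best_err:
            best_err, best = err, (theta, theta0)
    return best

def cross_val_error(X, y, k, n_folds=6, seed=0):
    """CVE for hyperparameter k: average the validation error over
    n_folds train/validation splits, E = (E1 + ... + E6) / 6."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    errs = []
    for i in range(n_folds):
        val = folds[i]
        fit = np.hstack([folds[j] for j in range(n_folds) if j != i])
        theta, theta0 = rlc(X[fit], y[fit], k, rng)
        errs.append(np.mean(np.sign(X[val] @ theta + theta0) != y[val]))
    return float(np.mean(errs))

# step 1: pick the k with the lowest CVE; step 2: retrain using that k
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-2, 1, (60, 2)), rng.normal(2, 1, (60, 2))])
y = np.array([-1] * 60 + [1] * 60)
best_k = min([1, 10, 100], key=lambda k: cross_val_error(X, y, k))
```

The `min(..., key=...)` call is step 1 of the takeaway; calling `rlc(X, y, best_k, rng)` afterwards would be step 2.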

KattamuriKowshiq

Excellent initiative. Please continue the series in the same interactive pattern, which helps learners like us from non-CS backgrounds focus for long stretches and grasp the content better. Lack of interactive content is a major flaw across all the major DS courses being offered these days. Thanks again. 🙏

shayanchakraborty

I don't think I would have understood the difference between evaluating classifier performance and learning algorithm performance anywhere else. Thank you!

reshmithampy

Two queries: 1. Where do we get the training data set from? 2. Does the CVE computation E1–E6 act on training and test data pairs, so to speak?

rajubalasubramaniyam

Hello Raj,

First of all, big kudos 🎉 to your team for the idea of setting up Vizuara. Thank you for these supremely awesome lectures on ML, which I have been searching for for a long time. The way this series begins is tremendous and quite relatable to many.

I have a question: I didn't get the difference between the hypothesis and the algorithm.

Correct me if I am wrong: up to step 3, it is like finding the best line that can separate dogs and cats with the minimum loss.

Can you explain how the algorithm fits in here, since the line (hypothesis) was already generated?

PatanMasoodKhan-qi

Is the hypothesis the same as a classifier?
Also, in your explanation of cross-validation in this video, the data was split into training and test sets, and then the training set was split further into k chunks. But in your notes and the MIT course, the entire data set was divided into k chunks. Can you please clarify that?
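One common convention that reconciles the two descriptions is to hold out the test set first and fold only the remaining training portion; notes that speak of folding "the entire data" often mean this training portion. A sketch of the index bookkeeping (the split ratio, fold count, and sizes are illustrative assumptions, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 120
idx = rng.permutation(n)        # shuffle all n examples once
test_idx = idx[:n // 5]         # 20% held-out test set, used only at the end
train_idx = idx[n // 5:]        # remaining 80% for learning + validation
folds = np.array_split(train_idx, 6)   # 6 chunks of the *training* set

fit_sizes = []
for i, val_fold in enumerate(folds):
    # train on fit_idx, validate on val_fold; average the 6 errors into the CVE
    fit_idx = np.hstack([folds[j] for j in range(6) if j != i])
    fit_sizes.append(len(fit_idx))
```

The held-out `test_idx` never enters any fold, so the final test error is an unbiased check on the hypothesis chosen via cross-validation.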

edumaba

How do we choose the training data and test data from the given data set (D)?
Should we do a CVE analysis for choosing these as well?

iqbal

The explanations are very informative.
I had a doubt from the video...
Query: the learning algorithm is basically the ML algorithm wherein we use the same dataset in many folds (the n-fold method) and then learn the best value of K. Is this done to find the best K value for the same algorithm?
In the same context, do we need the test error to measure the classifier performance?

anshumaangarg

Sir, is it possible to provide the digital whiteboard on which you are writing?

MEHEDIHASAN-jcxm

K is a random number, and the parameters are also randomly generated. I feel the error curve may not be smooth; the error at k=10 can be lower than at k=100 due to the randomness of k and the thetas. How do we think about this from a theoretical perspective?
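A single run of the algorithm is indeed noisy, so the observed curve need not be monotone; but averaged over repeated runs, the error can only go down as k grows, since the best of 100 draws is at least as good as the best of the first 10. A small experiment under an assumed toy setup (not from the lecture):

```python
import numpy as np

def rlc_error(k, seed):
    """Training error of the best of k random hyperplanes on a toy data set."""
    rng = np.random.default_rng(seed)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.array([-1] * 50 + [1] * 50)
    return min(
        np.mean(np.sign(X @ rng.standard_normal(2) + rng.standard_normal()) != y)
        for _ in range(k)
    )

# average over many independent runs to smooth out the randomness
avg_10 = np.mean([rlc_error(10, s) for s in range(30)])
avg_100 = np.mean([rlc_error(100, s) for s in range(30)])
```

Here `avg_100 <= avg_10` holds by construction: for each seed, the 100-draw run shares its first 10 draws with the 10-draw run, so its minimum can never be worse.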

govindsharma-unpx

Hello @vizura Raj,
Thanks for the wonderful session.
I have just one question from this lecture. As you said, a hyperparameter is a fixed value, but during the CVE are we going to define a fixed K value or a set of K values...?

lakshman

When the value of K is high, the overall time complexity increases. How do we decrease this complexity? Is there a way to decrease the time complexity and still get the best output?

Md.SaifMahamud

Suppose I want to measure the CVE: we select random values for the random choices, so how do we find out that a given value is the last one for the CVE, i.e., the one at which the CVE is minimum?

itsamitnitrkl