Hands-On Machine Learning - Chapter 7 - Ensemble Learning and Random Forests

An overview of Chapter 7 of the book Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow.

Comments

You're doing a great job. It's really helpful when we can actually code along with you. Here's hoping for more of that type of content. Cheers.

ashmalvayani

Solid videos. I'm following you as I read the book, and it's been incredibly helpful.

wilrivera

Thank you so much, bro. I have been following the series from the start; it's so cool and lit.

elishabulalu

Yeah, your tutorials are good and easy to understand. I recommend these to anyone who is new to machine learning and needs more of a roadmap. Well done!

alexiojunior

I'm gonna follow you from now on. ❤

adrinemarieadonis

Man, your work helps me a lot! Thank you!

pedroabranches

Thanks for all the effort and knowledge shared in these videos. I do have one question about minute 10:00 of the video: you scale the data after it was one-hot encoded. Does that mean the binary columns now get centred at 0 with variance 1? That would give us values other than 0 and 1. Is that the right way to do it?

gabrielfreire
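
One common way to address this concern is to standardize only the numeric columns and pass the one-hot indicators through untouched, for example with scikit-learn's ColumnTransformer. A minimal sketch; the column names and values here are made up for illustration, not taken from the video:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Hypothetical toy data: two numeric columns and one categorical column.
    df = pd.DataFrame({
        "age": [22, 38, 26, 35],
        "fare": [7.25, 71.28, 7.92, 53.10],
        "embarked": ["S", "C", "S", "S"],
    })

    preprocess = ColumnTransformer([
        ("num", StandardScaler(), ["age", "fare"]),   # centred at 0, unit variance
        ("cat", OneHotEncoder(), ["embarked"]),       # stays strictly 0/1
    ])

    X = preprocess.fit_transform(df)
    print(X)

This way the indicator columns keep their 0/1 meaning while the numeric features get the scaling that distance- or gradient-based models benefit from.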

Shashank, have you heard about the Applied AI Course (AAIC)? Do you have any tips or feedback/opinions?

sehaj

Isn't bagging/pasting similar to what we do in k-fold cross-validation?
Both use different subsets of the training data to fit the model, so is k-fold cross-validation just another case of bagging/pasting?

pranavgupta
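
The two are related but serve different purposes: bagging/pasting resamples the training set to build many predictors whose outputs are aggregated, while k-fold cross-validation partitions the data into disjoint folds to estimate a single model's generalization error. A small sketch of the difference in the index patterns (illustrative only):

    import numpy as np
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(42)
    n = 10
    indices = np.arange(n)

    # Bagging: each predictor gets n indices drawn WITH replacement,
    # so the subsets overlap freely and contain duplicates.
    for p in range(3):
        print("bag", p, sorted(rng.choice(indices, size=n, replace=True)))

    # k-fold CV: the data is partitioned into disjoint validation folds,
    # used to evaluate one model, not to build an ensemble.
    for fold, (train_idx, val_idx) in enumerate(KFold(n_splits=5).split(indices)):
        print("fold", fold, "validation:", val_idx)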

Bahaha, the distraction caused by the Porsche 911.

fatimak

Why didn't you complete the deep learning part?

omaralkhasawneh

Commenting for the YouTube (ML) algo :)

fatimak

12:00 Just wanted to point out that your explanation of bagging and pasting is not entirely right.

Bootstrapping means sampling with replacement: you take one sample, put it back, and draw another sample until the desired sample size is reached. Bootstrap aggregating, or bagging, means doing this multiple times with multiple predictors or classifiers.

For pasting, by contrast, you sample, don't put it back, and sample again until the desired sample size is reached.

The way you explained it, you would sample a certain subset and train on it before replacing it, which is not how bootstrapping works. Your explanation would mean that the probability of a sample being drawn varies, since you would only be replacing the entire subset at the end. In bootstrapping, however, the probability of a particular sample being drawn is equal on every draw, because after each draw you replace the drawn sample.

I hope this clarifies things.

wilsvenleong
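
For reference, the per-draw replacement described above is exactly what NumPy's choice gives you with replace=True; a minimal sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    data = np.arange(10)

    # Bootstrap (bagging): sample WITH replacement. Each drawn value goes
    # "back in the bag", so every draw has equal probability and
    # duplicates can occur within one subset.
    bootstrap_sample = rng.choice(data, size=len(data), replace=True)

    # Pasting: sample WITHOUT replacement, so no duplicates within one subset.
    pasting_sample = rng.choice(data, size=5, replace=False)

    print(bootstrap_sample)  # may contain repeated values
    print(pasting_sample)    # always distinct values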

Scaling is not necessary for random forests.

shaelanderchauhan
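
Right: decision trees split on thresholds, so rescaling a feature by a positive constant leaves the induced partitions unchanged. A quick illustrative check (with the same random_state, both forests should make identical predictions, so this should print True):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=300, random_state=42)
    X_big = X * 1000.0  # rescale every feature by a positive constant

    preds = RandomForestClassifier(random_state=0).fit(X, y).predict(X)
    preds_big = RandomForestClassifier(random_state=0).fit(X_big, y).predict(X_big)

    print(np.array_equal(preds, preds_big))  # splits depend on order, not magnitude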

@Shashank Kalanithi I know it sounds stupid, but what are you drinking?

pavlostsoukias

You have data leakage because of the encoding, FYI.

orioncloud
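
Presumably this refers to fitting the encoder on the full dataset before splitting, which lets information from the test rows leak into the preprocessing. One standard fix is to put all preprocessing in a Pipeline fitted on the training split only. A minimal sketch with made-up column names and data:

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical toy data.
    df = pd.DataFrame({
        "color": ["red", "blue", "red", "green", "blue", "red"],
        "size":  [1.0, 2.5, 0.8, 3.1, 2.2, 1.7],
        "label": [0, 1, 0, 1, 1, 0],
    })
    X, y = df[["color", "size"]], df["label"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = Pipeline([
        ("prep", ColumnTransformer(
            [("cat", OneHotEncoder(handle_unknown="ignore"), ["color"])],
            remainder="passthrough")),
        ("clf", RandomForestClassifier(random_state=42)),
    ])

    model.fit(X_train, y_train)   # the encoder only ever sees training rows
    print(model.score(X_test, y_test))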

It was very poorly done. I was banking on your video and thought of subscribing and joining your Patreon, but this was awfully done... Sorry, can't subscribe either...

anamikadassmanna