K-Fold Cross Validation - Intro to Machine Learning

Comments

can't stop looking at the blister on his hand

BrettBissey

Thank you for this explanation about k-fold cross validation! It is really helpful!😃

jingyiwang

1. Train/Validation 2. Either 3. K-Fold CV
I've seen a lot of answers I disagree with in the comments, so I'll explain. First, the terminology is Train/Validation when the data is used to train the model. The Test set should be taken out prior to doing the Train/Validation split and remain separate throughout training. The Test set will then be used to test the trained model. Second, the answers. 1. Obviously, training will take longer when you do it 10 times. 2. While training did take longer, you are running the same size model in production; all other things being equal, the run times of both already-trained models should be equal. 3. The improved accuracy is why you would want to use K-Fold CV.

If I'm wrong, please explain. I'll probably never see your comments, but you could help someone else.

garybutler
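
A minimal sketch of the workflow this comment describes, assuming scikit-learn; the dataset, classifier, and split sizes are illustrative choices, not taken from the video:

```python
# Held-out test set first, then 10-fold cross validation on the training
# portion only; the test set is scored exactly once at the end.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Take the test set out before any training/validation happens.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# 10-fold CV on the training data estimates how well this setup generalizes.
clf = DecisionTreeClassifier(random_state=42)
cv_scores = cross_val_score(clf, X_train, y_train, cv=10)
print("mean CV accuracy:", cv_scores.mean())

# Fit once on all the training data, then test a single time.
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```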

It's obvious that the results are: train/test, train/test, and then cross validation.
Cross validation runs the training "k" times, so it's "k" times slower, but on the other hand it's more accurate.

dorsolomon
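
A quick sketch of the "k times slower" point; timings will vary by machine, and the dataset and model are illustrative assumptions, not from the video:

```python
# A single split fits the model once; 10-fold CV fits it ten times,
# so training takes roughly ten times as long.
import time

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

start = time.time()
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
SVC().fit(X_tr, y_tr).score(X_te, y_te)
print("single split:", time.time() - start, "s")

start = time.time()
cross_val_score(SVC(), X, y, cv=10)  # ten fits instead of one
print("10-fold CV:  ", time.time() - start, "s")
```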

k-fold cross validation runs k learning experiments, so at the end you get k different models... Which one do you choose?

apericube
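
One common answer (an assumption about standard practice, not something stated in the video) is that cross validation scores the configuration rather than producing the deliverable model; a sketch, assuming scikit-learn:

```python
# Cross validation scores a model *configuration*; the k fitted models are
# discarded, and the final model is refit once on all the training data.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Ten fits, ten scores; only the scores are kept, to judge this configuration.
scores = cross_val_score(SVC(C=1.0, kernel="rbf"), X, y, cv=10)
print("estimated accuracy:", scores.mean())

# The deliverable is a single model trained on everything.
final_model = SVC(C=1.0, kernel="rbf").fit(X, y)
```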

Why is it so hard to find a simple, concrete, by-hand example of k-fold cross validation? All the documentation I can find is very generalized information, with no practical examples anywhere.

DoughyBoy
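
A small by-hand sketch of k-fold cross validation on a toy dataset; the data and the trivial "model" are illustrative assumptions:

```python
import numpy as np

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features
y = np.array([0, 1] * 5)          # toy labels
k = 5

# Shuffle the sample indices, then cut them into k roughly equal folds.
indices = np.arange(len(X))
np.random.default_rng(0).shuffle(indices)
folds = np.array_split(indices, k)

scores = []
for i in range(k):
    val_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # "Train": a trivial model that predicts the training fold's majority class.
    majority = np.bincount(y[train_idx]).argmax()
    # "Validate": accuracy of that model on the held-out fold.
    scores.append(np.mean(y[val_idx] == majority))

print("per-fold accuracy:", scores)
print("average accuracy:", np.mean(scores))
```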

Interesting to see your video presented like this. Would you mind sharing how you present your drawing like this?

Jsheng

The voice sounds like Sebastian Thrun. Great guy :)

kias

The test bin is different every time, so how do you average the results? Can you please provide a detailed explanation on this?

snk
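
A minimal sketch of how the per-fold results are combined: each held-out bin yields one score, and the reported number is their mean. The classifier and data here are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

scores = cross_val_score(GaussianNB(), X, y, cv=10)  # one accuracy per fold
print(scores)                        # 10 numbers, one per held-out bin
print(scores.mean(), scores.std())   # the averaged result and its spread
```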

Thanks for the video! Quick (silly) question: in any of those validation methods, every time you change the training data, do you re-fit the model? If so, every validation step is with respect to a different model fit. Then how do you determine your final model?

Tyokok

I have a question: this method will create ten models on ten splits of the data and validate ten times on ten test bins, so what is the final model?

muzhao-rv

Hi bro, thanks for sharing this! Just a question: with which application did you make this tutorial? It's amazing... your text appears on and above your hand.

omidasadi

This video is really useful, thank you very much.
It helped me a lot.

MeiRoleplay

Interesting video. Thanks for sharing.

ijyoyo

But this doesn't solve the issue of choosing the bin size, i.e. the trade-off between training set and test set size (although you are now using all the data for both tasks at some point).

theawesomeNZ
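
A short sketch of how k itself sets the train/validation proportions: with k folds, each run trains on roughly (k-1)/k of the data and validates on 1/k of it. The dataset and classifier are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Larger k -> more data for training in each run, less for validation.
for k in (2, 5, 10):
    scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=k)
    print(f"k={k}: mean accuracy {scores.mean():.3f} over {len(scores)} folds")
```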

Do all 10 folds have to be the same size? What is the effect if they are of different sizes?

randa
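
A quick check of fold sizes, assuming scikit-learn's KFold; when the data doesn't divide evenly, the folds differ in size by at most one sample:

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(23).reshape(-1, 1)  # 23 samples: 10 equal folds aren't possible
for train_idx, test_idx in KFold(n_splits=10).split(X):
    print(len(train_idx), len(test_idx))  # held-out folds of size 3 or 2
```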

Can anybody provide the link to Mrs. Katie's video that describes the training and test sets?

sumitdam

This is incorrect. You should correct this video, as you're encouraging people to mix their train and test sets, which is a cardinal sin of machine learning. Every time you say test set, you should be saying validation set. The test set can only be used for testing once, and cannot be used to inform hyperparameters.

reedsutton
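
A sketch of the split this comment describes, assuming scikit-learn; the estimator and parameter grid are illustrative choices. Cross validation happens inside the training data, and the test set is touched exactly once at the end:

```python
# Hyperparameters are tuned with k-fold CV inside the training data;
# the held-out test set is evaluated exactly once at the very end.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=10)
search.fit(X_train, y_train)  # all model selection happens here

print("test accuracy:", search.score(X_test, y_test))  # used once
```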

1:15 - 1:18 Someone please tell me what he said. He was speaking English, then he just jumbled his words up.

TehTechExpert