All Type Of Cross Validation With Python All In 1 Video

Building machine learning models is an important part of predictive modeling. However, without proper model validation, we can never be confident that a trained model will generalize well to unseen data. Model validation helps ensure that the model performs well on new data and helps in selecting the best model, its parameters, and the accuracy metrics.

In this guide, we will learn the basics and implementation of the model validation techniques listed below (a short scikit-learn sketch of each follows the list):
#crossvalidation
Hold-Out Validation
K-Fold Cross-Validation
Stratified K-Fold Cross-Validation
Leave-One-Out Cross-Validation
Repeated Random Test-Train Splits
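
Below is a minimal, illustrative sketch of the five techniques using scikit-learn. The iris dataset and logistic regression model are placeholders only, not necessarily what the video uses; swap in your own data and estimator.

# Minimal sketch of the five validation techniques above (scikit-learn).
# The iris data and LogisticRegression are placeholders only.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (KFold, LeaveOneOut, ShuffleSplit,
                                     StratifiedKFold, cross_val_score,
                                     train_test_split)

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# 1) Hold-out validation: a single train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)
print("Hold-out:", model.fit(X_train, y_train).score(X_test, y_test))

# 2) K-fold cross-validation.
kf = KFold(n_splits=5, shuffle=True, random_state=42)
print("K-fold:", cross_val_score(model, X, y, cv=kf).mean())

# 3) Stratified K-fold: keeps the class ratio in every fold.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
print("Stratified K-fold:", cross_val_score(model, X, y, cv=skf).mean())

# 4) Leave-one-out: each sample is the validation set exactly once.
print("LOOCV:", cross_val_score(model, X, y, cv=LeaveOneOut()).mean())

# 5) Repeated random test-train splits (ShuffleSplit).
ss = ShuffleSplit(n_splits=10, test_size=0.3, random_state=42)
print("Repeated splits:", cross_val_score(model, X, y, cv=ss).mean())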
Subscribe to my vlogging channel.
Please donate via GPay UPI ID if you want to support the channel.

Please join my channel as a member to get additional benefits like Data Science materials, live streaming for members, and many more.

Connect with me here:
Comments

Don't forget to subscribe to my vlogging channel for motivation and Data Science Q&A videos.

krishnaik

We need more *All in 1 video* tutorials!! Great work as always!!

rog

Even in my college days I didn't reach class 10 minutes early, but here I am waiting 15 minutes before the start. Hats off to you, Krish sir and Sudhanshu sir, for what you do for DS and ML aspirants.

rhevathivijay

Thanks Krish for this video. I was looking for such content for a long time; I had many doubts about K-fold and LOOCV, and finally all of them got cleared.

sachinpathania

Krish Naik sir, please make a video on LightGBM and CatBoost. Sir, I truly consider you my elder brother; if you don't make this video, I will assume you don't consider me your younger brother.

satyamtripathi

Please make a video on an LR classifier trained with the ABC algorithm.

sunitaskitchen

When will you make a video on BERT? I recently completed watching your NLP playlist and now I am waiting for the BERT session.

syedalinaqi

Explained everything like butter. Superb, sir.

rambaldotra

Sir, in machine learning, suppose a model is overfitted and we apply k-fold cross-validation (k = 10) on top of it; then we will see a lot of variation in the cross-validation scores, right?
Such as
scores = [40, 80, 70, 80, 50, 83.2, 81, 84, ...]
Here the difference between the highest and lowest accuracy is 84 - 40 = 44, which is very high, meaning our model is overfitted and not capable of performing on unknown data, hence we need to regularise it and do hyperparameter tuning.
Please correct me if I'm wrong.
Thank you!!
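
A quick way to inspect that spread in code (a sketch only; model, X and y are assumed to be defined as in the video):

import numpy as np
from sklearn.model_selection import KFold, cross_val_score

# Assumes `model`, `X`, `y` are already defined (as in the video).
kf = KFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(model, X, y, cv=kf)
print("per-fold scores:", np.round(scores, 3))
print("mean:", scores.mean(), "std:", scores.std())
print("max - min:", scores.max() - scores.min())  # a large gap signals unstable performance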

shubhamchoudhary

train_test_split also has a stratify option; would that be useful for an imbalanced dataset?
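
For context, stratify=y makes train_test_split preserve the class proportions in both splits (a sketch; X and y assumed):

from sklearn.model_selection import train_test_split

# Sketch: the class ratio of `y` is kept the same in the train and
# test sets, which helps when the classes are imbalanced.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)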

eramitjangra

7:22 Aren't we supposed to use X_train and Y_train instead of X and y as the parameters of the cross_val_score function?
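
One common pattern (a sketch, not necessarily what the video does) is to keep a final hold-out test set and cross-validate only on the training portion:

from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Sketch: `model`, `X`, `y` assumed. Cross-validate on the training
# portion only, then score once on the untouched hold-out set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
kf = KFold(n_splits=5, shuffle=True, random_state=42)
print("CV on training data:", cross_val_score(model, X_train, y_train, cv=kf).mean())
print("Final hold-out score:", model.fit(X_train, y_train).score(X_test, y_test))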

MrKevZap

Thanks for the video. Please make a video series on PySpark MLlib.

rohitjagdale

Sir, please make videos on Bayesian optimization.

shilpaprusty

Sir, please guide me: if I am learning Python, do I also need to learn front-end technologies like HTML, CSS, JavaScript, and frameworks? Please help me.

sachinpatil

Hello Krish, can you make a session on how to create an ensemble model (implementation)?

aimen

Sir, for an imbalanced dataset, can't we first handle the imbalance and then do the K-fold test?
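
One way this is often handled (a sketch that assumes the third-party imbalanced-learn package and a binary target) is to put the resampler inside a pipeline, so oversampling is applied only to the training folds of each split:

from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Sketch: SMOTE runs only on the training folds inside each CV split,
# so the validation fold is never oversampled. `X`, `y` assumed; binary target.
pipe = Pipeline([("smote", SMOTE(random_state=42)),
                 ("clf", LogisticRegression(max_iter=1000))])
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
print(cross_val_score(pipe, X, y, cv=skf, scoring="f1").mean())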

anshtandon

Thank you @Krish for such a great explanation. I have a question: in cross_val_score(model, X, y, cv = k_fold_validation) we pass X and y from the original data rather than only the training data. Suppose X and y also have missing values. If we impute them before cross-validation, is there a data-leakage problem? If so, how could we handle it?
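
A sketch of one way to avoid that leakage for missing values in X (rows with a missing y usually have to be dropped): put the imputer inside a pipeline so its statistics are learned from the training folds only.

from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline

# Sketch: the imputer's means are fitted on the training folds only, so
# nothing from the validation fold leaks into preprocessing. `X`, `y` assumed.
pipe = Pipeline([("impute", SimpleImputer(strategy="mean")),
                 ("clf", LogisticRegression(max_iter=1000))])
kf = KFold(n_splits=10, shuffle=True, random_state=42)
print(cross_val_score(pipe, X, y, cv=kf).mean())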

vaibhavkhobragade

Please upload a video about imbalanced datasets and the techniques to handle them, various feature engineering approaches, etc.

pavankumarchahar

Hi sir, please do a video on KD-tree and ball tree.

harikrishnakokkula

I hear that doing cross-validation on the entire dataset is wrong. Can you explain why it isn't done only on the training set?

gwapdamathtutor