Kaggle's 30 Days Of ML (Competition Part-4): Hyperparameter tuning using Optuna

This video is a walkthrough of Kaggle's #30DaysOfML. In this video, I show you how you can use #Optuna for #HyperparameterOptimization. We will use XGBoost, but you can use the same method for any model!
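For reference, a minimal sketch of what such a tuning loop looks like. The dataset, parameter ranges, and trial count here are illustrative assumptions, not the exact code from the video:

```python
# Sketch: Optuna searches the XGBoost hyperparameter space; each trial
# is scored by validation RMSE. Swap the synthetic data for train.csv.
import optuna
import numpy as np
from sklearn.datasets import make_regression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=42)

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-2, 0.25, log=True),
        "reg_lambda": trial.suggest_float("reg_lambda", 1e-8, 100.0, log=True),
        "reg_alpha": trial.suggest_float("reg_alpha", 1e-8, 100.0, log=True),
        "subsample": trial.suggest_float("subsample", 0.1, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.1, 1.0),
        "max_depth": trial.suggest_int("max_depth", 1, 7),
    }
    # A large n_estimators plus early stopping (a constructor argument in
    # xgboost >= 1.6) lets each trial find its own best tree count.
    model = XGBRegressor(
        n_estimators=7000, random_state=42, early_stopping_rounds=300, **params
    )
    model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)], verbose=False)
    preds = model.predict(X_valid)
    return np.sqrt(mean_squared_error(y_valid, preds))  # RMSE, to minimize

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=25)
print(study.best_params, study.best_value)
```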

Note: this video is not sponsored by #Kaggle!

Please subscribe and like the video to help me stay motivated to make awesome videos like this one. :)

Follow me on:
Comments

Please subscribe and like the video to help me stay motivated to make awesome videos like this one. :)

abhishekkrthakur

Thank God you are back 😭😭😭 Once I get a job related to DS, I will definitely pay you back, brother!

geekyprogrammer

Woaaah!!! After this Optuna tuning I jumped from rank 1500 to the top 300!!! :-)

madhuful

Thanks for the videos. These are very helpful.

t-m

Beautifully explained, GradMaster 🙌 I was searching for a tutorial like this for a long time. Also, your book is amazing.

vivekchowdhury

Thank you so much for your clear explanations of what I believe are pretty complex concepts. It has been a great experience learning from you over the past few weeks.

soumyasubhrabhowmik

Abhishek, you're amazing! Thank you so much for sharing this valuable knowledge!

agamenon

Thank you very much, I learned a lot from this video

pandasaspd

Thanks for the videos. Hope you can do more series like this for other tutorials and competitions.

loguansiang

Nice explanation of Optuna. I giggled when you said n_estimators=7000 is small, since I have an 8GB PC 😁

floopybits

Hi, if I use a Conv1D (CNN1D) model, what code would I use with Optuna to optimize the number of filters and the kernel size?

Yu-ndkr
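The video doesn't cover this, but the same objective/study pattern carries over to any model. A minimal sketch with Keras, where the model shape, data, and search ranges are illustrative assumptions:

```python
# Sketch: letting Optuna pick Conv1D filters and kernel size (Keras).
import optuna
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 100, 1).astype("float32")  # (samples, timesteps, channels)
y = np.random.randint(0, 2, size=(500,))

def objective(trial):
    # The two hyperparameters the comment asks about:
    filters = trial.suggest_int("filters", 8, 128)
    kernel_size = trial.suggest_int("kernel_size", 2, 9)
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(filters, kernel_size, activation="relu",
                               input_shape=(100, 1)),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(X, y, validation_split=0.2, epochs=5,
                        batch_size=32, verbose=0)
    return history.history["val_accuracy"][-1]

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
print(study.best_params)
```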

Sir, can you please share the link to the other video you mentioned at the beginning? Thank you.

AI-Kawser

Love the video!
I was wondering: why not incorporate cross_val_score within the study?
Wouldn't it deliver better results in terms of model selection?

I tried my best to incorporate it but couldn't find an elegant solution (maybe I don't even have to).

eyalbaum
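One way to do what this comment asks (a sketch, not from the video): call scikit-learn's cross_val_score inside the objective and return the mean fold score. The trade-off is roughly K times the training cost per trial, and you give up per-split early stopping:

```python
# Sketch: each Optuna trial is scored by 5-fold CV instead of a
# single validation split.
import optuna
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=1000, n_features=20, random_state=42)

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-2, 0.25, log=True),
        "max_depth": trial.suggest_int("max_depth", 1, 7),
        "subsample": trial.suggest_float("subsample", 0.1, 1.0),
    }
    model = XGBRegressor(n_estimators=500, random_state=42, **params)
    # The scorer is negated so that higher is better; flip the sign back
    # and minimize the mean RMSE across the 5 folds.
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=25)
```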

What does log=True do in suggest_float? How is it different from suggest_loguniform or suggest_discrete_uniform?

priyanksharma
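In recent Optuna versions these overlap: suggest_float(name, low, high, log=True) samples on a log scale and is the recommended replacement for suggest_loguniform, while suggest_discrete_uniform(name, low, high, q) samples on a linear scale in steps of q and is equivalent to suggest_float(..., step=q). A runnable sketch of the two recommended forms:

```python
import optuna

def objective(trial):
    # log=True samples log-uniformly: each decade between 1e-5 and 1e-1
    # is equally likely, which suits learning rates and regularization.
    # suggest_loguniform is the older spelling of exactly this and is
    # deprecated in Optuna 3.x.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    # step=q restricts linear-scale sampling to multiples of q; this is
    # what suggest_discrete_uniform(name, low, high, q) used to do.
    subsample = trial.suggest_float("subsample", 0.5, 1.0, step=0.1)
    return lr + subsample  # dummy objective just to make this runnable

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=5)
```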

What is the reason for models giving high accuracy on CPU and slightly lower accuracy on GPU? Are there any technical reasons for this?

kiranchowdary
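Small CPU/GPU differences are expected: the GPU histogram method bins features and accumulates floating-point sums in a different order than the CPU method, so scores drift slightly without either being inherently more accurate. One way to measure the gap on your own data (a sketch; "gpu_hist" needs a GPU build of xgboost and is deprecated in xgboost 2.x in favor of tree_method="hist" with device="cuda"):

```python
# Sketch: scoring the same folds with CPU vs GPU tree methods.
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

X, y = make_regression(n_samples=2000, n_features=20, random_state=42)

for tree_method in ["hist", "gpu_hist"]:
    model = XGBRegressor(n_estimators=300, tree_method=tree_method,
                         random_state=42)
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(tree_method, round(-scores.mean(), 5))
```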

Hi Abhishek,
In the Optuna code, I observed that you commented out the GPU params in XGBRegressor while submitting predictions, but the same params were enabled during hyperparameter tuning. Is there a reason?

madhuful
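A likely reason (my reading, not confirmed in the video): the GPU params only speed up training, so they matter for the hundreds of fits during the search, while the single final fit runs fine on CPU, for example on a machine without a GPU. The pattern, sketched with xgboost 1.x-style keys that are assumptions rather than the video's exact code:

```python
# Sketch: GPU params during the search (hundreds of fits),
# plain CPU params for the one final submission fit.
from xgboost import XGBRegressor

gpu_params = {"tree_method": "gpu_hist", "predictor": "gpu_predictor"}

# Every Optuna trial trains with GPU acceleration:
trial_model = XGBRegressor(n_estimators=7000, **gpu_params)

# The final fit reuses the best hyperparameters but drops the GPU keys;
# the tuned values are equally valid on CPU:
final_model = XGBRegressor(n_estimators=7000, tree_method="hist")
```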

This is great, but up to now I am stuck in 100th place :( Now I am trying this technique with 1000 trials to find the best parameters.

mohamadosman