Mastering Hyperparameter Tuning with Optuna: Boost Your Machine Learning Models!

In this comprehensive tutorial, we delve deep into the world of hyperparameter tuning using Optuna, a powerful Python library for optimizing machine learning models. Whether you're a data scientist, machine learning enthusiast, or just looking to improve your model's performance, this video is packed with valuable insights and practical tips to help you harness the full potential of Optuna.

Interested in discussing a Data or AI project? Feel free to reach out via email or simply complete the contact form on my website.

🍿 WATCH NEXT
Vid 1:
Vid 2:
Vid 3:

MY OTHER SOCIALS:

WHO AM I?
As a full-time data analyst/scientist at a fintech company specializing in combating fraud within underwriting and risk, I've transitioned from my background in Electrical Engineering to pursue my true passion: data. In this dynamic field, I've discovered a profound interest in leveraging data analytics to address complex challenges in the financial sector.

This YouTube channel serves as both a platform for sharing knowledge and a personal journey of continuous learning. With a commitment to growth, I aim to expand my skill set by publishing 2 to 3 new videos each week, delving into various aspects of data analytics/science and Artificial Intelligence. Join me on this exciting journey as we explore the endless possibilities of data together.

*This is an affiliate program. I may receive a small portion of the final sale at no extra cost to you.
Comments

Hey guys I hope you enjoyed the video! If you did please subscribe to the channel!


*Both Datacamp and Stratascratch are affiliate links.

RyanAndMattDataScience

I found out about Optuna while working on a Kaggle competition. This video will help me a lot in Kaggle competitions. Thanks a lot Ryan 👍💯

neilansh

I gotta give Optuna a try. I usually just use GridSearchCV or RandomizedSearchCV for hyperparameter tuning.

masplacasmaschicas

Can't see the right part of the code, so some terms aren't readable. Please provide the GitHub link, or else make your recording screen larger.

btw great videos

ritamchatterjee

Great video! Could you explain more about the hyperparameter importance here and what kind of insights you can learn from it?

ruihanli

Thank you for honestly sharing the results! Could it be that train_test_split accidentally created a split with an unbalanced target? Another reason for getting a worse OOS result I can think of is optimizing for mean CV scores without taking variance into account. And the third one I suspect is a missing sensitivity study. Like, we found the peak on the train set, but maybe in its vicinity it had only valleys or even cliffs (we averaged across data splits but not across neighboring parameters)? And the last option is the simple absence of early stopping: the final model can simply be an overfit one. Going to recreate your example and find out )

anatolyalekseev
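One idea from the comment above, optimizing for mean CV scores without taking variance into account, can be sketched as follows. This is a hypothetical illustration, not code from the video; the penalty weight of 1.0 is an arbitrary assumption. Subtracting the fold-score standard deviation from the mean makes the objective prefer hyperparameter regions with stable scores over narrow, unstable peaks.

```python
# Sketch: a variance-penalized CV score (illustrative, not the video's code).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
model = RandomForestClassifier(max_depth=4, random_state=0)

scores = cross_val_score(model, X, y, cv=5)
# Mean minus a variance penalty; the weight 1.0 is an assumed choice.
robust_score = scores.mean() - 1.0 * scores.std()
print(round(robust_score, 3))
```

Returning `robust_score` instead of `scores.mean()` from an Optuna objective is one simple way to make the search variance-aware.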

Good work. Why is Optuna better than GridSearchCV or RandomizedSearchCV?

becavas

Sir, I don't know why you didn't share the code after making the learning project. Please share it, sir.
Q1) At the end of the video you didn't get the results. So what does the production code really look like? Do you do any kind of hyperparameter tuning? Or do you go on the basis of your knowledge of the different parameters and your intuition in order to get much better hyperparameter tuning?

So could you please share your knowledge with respect to production-level code, sir?

introvertwhiz-llip

Hey man, been following you for a while... Big fan!

deepsuchak.

Well presented, but the performance is poorer than your baseline, correct?

thomashadler

I enjoyed your video, but it looks as if you're missing the point of the `random_state` parameter. It allows for reproducibility: if you pass the same fixed value for `random_state`, you'll always get the same output. That's useful for testing.

trn
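The reproducibility point in the comment above can be sketched in a few lines. This is a hypothetical illustration, not code from the video: fixing `random_state` makes an otherwise random train/test split identical across runs.

```python
# Sketch: a fixed random_state makes a "random" split reproducible.
from sklearn.model_selection import train_test_split

data = list(range(10))
split_a = train_test_split(data, test_size=0.3, random_state=42)
split_b = train_test_split(data, test_size=0.3, random_state=42)
# Same seed -> same train and test partitions every run.
assert split_a[0] == split_b[0]
assert split_a[1] == split_b[1]
print("splits match")
```

Omitting `random_state` (or passing different values) would generally yield different splits on each call.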