End-to-End: Automated Hyperparameter Tuning For Deep Neural Networks

In this video, I show you how to do #HyperparameterOptimization for a #NeuralNetwork automatically using Optuna. This is an end-to-end video in which I pick a problem, design a neural network in #PyTorch, and then find the optimal number of layers, dropout, learning rate, and other parameters using Optuna.
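
For context, here is a minimal sketch of what such an Optuna setup can look like for a PyTorch model. The parameter ranges and the train_one_fold helper are illustrative assumptions, not the exact code from the video:

import optuna

def objective(trial):
    # Let Optuna suggest the architecture and training hyperparameters.
    params = {
        "num_layers": trial.suggest_int("num_layers", 1, 7),
        "hidden_size": trial.suggest_int("hidden_size", 16, 1024),
        "dropout": trial.suggest_float("dropout", 0.1, 0.7),
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-3, log=True),
    }
    # train_one_fold (assumed helper) builds the PyTorch model, runs the
    # training loop, and returns the best validation loss for one fold.
    losses = [train_one_fold(fold, params) for fold in range(5)]
    return sum(losses) / len(losses)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=20)
print(study.best_trial.params)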

Please subscribe and like the video to help keep me motivated to make awesome videos like this one. :)

00:00 Introduction
01:56 Dataset class
08:19 Cross-validation folds
13:38 Reading the data
24:10 Engine
29:48 Model
35:10 Add model and engine to training
43:05 Optuna
49:02 Start tuning with Optuna
52:50 Training, suggestions and outro

Follow me on:
Comments

Every time you code, I learn something new. Please never stop coding end-to-end in your videos. Thank you, you are amazing!

nithinisfun

Great explanation! Making lives easier one layer at a time :)

AnubhavChhabra

Super cool, Abhishek. Loved every section, especially the "poor man's early stopping"... ;-)

sambitmukherjee

This video was really helpful. It was a one-hour bootcamp covering everything about ANNs with PyTorch: from loading datasets and defining the neural network architecture to optimizing the hyperparameters with Optuna.

priyankasagwekar

Thank you for sharing your knowledge. This is an amazing tutorial with no inaccessible jargon. 10/10, highly recommend.

mikhaeldito

I am writing a research paper in this area. I can't wait!

ephi

Really appreciate the effort you put into the video. This is world class. Thank you.

Phateau

Love the video. Hyperparameter optimisation is one of my favourites, and this video tops it all, so now I've got to do this on my own model training! :tada:

neomatrix

This is the first time I am watching one of your videos. Very informative! Thanks for sharing 😇

lokeshkumargmd

Every time, something new... thank you so much.

shaikrasool

That was a very informative session. Is hyperparameter tuning covered in your book? I think I should buy a copy! Thanks.

jeenakk

Wonderful, mate; much appreciated, thanks for sharing it.

TheOraware

Thanks for the amazing video! In this example, will the hidden size and dropout change for each hidden layer, or remain the same across all hidden layers?

sindhujaj
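
On the per-layer question above: in Optuna, whether the hidden size and dropout vary per layer depends on how the suggestions are named. A small illustrative sketch (assumed names and ranges, not the video's exact code):

def suggest_layer_params(trial, num_layers):
    # Giving each layer its own parameter name lets Optuna pick a different
    # hidden size and dropout per layer; reusing a single name keeps them
    # identical across layers.
    layers = []
    for i in range(num_layers):
        hidden = trial.suggest_int(f"hidden_size_l{i}", 16, 1024)
        dropout = trial.suggest_float(f"dropout_l{i}", 0.1, 0.7)
        layers.append((hidden, dropout))
    return layers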

Hi Abhishek, I just landed on this video, so I am not sure whether you have addressed this earlier: I am curious to know your preference for PyTorch as against TensorFlow or Keras.

kannansingaravelu

A general question: is HPO overhyped? If an ensemble performs much better, should we invest time in HPO, given that we have limited time? Thoughts?

AayushThokchom

Awesome. One question, though: how do you deal with overfitting and underfitting issues while building the end-to-end tuning pipeline?

tiendat

Thanks for a great video! Just to be clear: you are using standard 5-fold CV, thus optimising for a set of hyperparameters that gives the best loss across (the mean of) all 5 folds. Wouldn't it be more suitable to split the training data into train/val and then optimise the hyperparameters individually for each fold (nested CV)?

kaspereinarson
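
On the nested-CV point above, a rough sketch of the idea (fit_and_score is an assumed helper, not from the video): hyperparameters are tuned on an inner train/validation split within each outer fold, and the outer fold's held-out data is only used for the final estimate.

import optuna
from sklearn.model_selection import KFold, train_test_split

def nested_cv(data, targets, n_outer=5, n_trials=20):
    outer_scores = []
    for train_idx, test_idx in KFold(n_splits=n_outer, shuffle=True).split(data):
        # Inner split: used only for the hyperparameter search.
        tr_idx, val_idx = train_test_split(train_idx, test_size=0.2)

        def objective(trial):
            params = {
                "dropout": trial.suggest_float("dropout", 0.1, 0.7),
                "learning_rate": trial.suggest_float("learning_rate", 1e-6, 1e-3, log=True),
            }
            # fit_and_score (assumed helper): train on tr_idx, return loss on val_idx.
            return fit_and_score(data, targets, tr_idx, val_idx, params)

        study = optuna.create_study(direction="minimize")
        study.optimize(objective, n_trials=n_trials)

        # Evaluate this fold's best params on the untouched outer test fold.
        outer_scores.append(
            fit_and_score(data, targets, train_idx, test_idx, study.best_trial.params)
        )
    return outer_scores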

Hi Abhishek, very cool video as always. Don't you think we should reset the early_stopping_counter to 0 after a new best_loss is found (line 62 at 41:20 in the video)? Thanks!

MadrissS
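
On the early-stopping question above, a sketch of the usual pattern (the engine/loader interface and the patience value are assumed, not taken from the video): the counter is reset to 0 whenever a new best loss is found, and training stops once the patience is exceeded.

def train_with_early_stopping(engine, train_loader, valid_loader,
                              epochs=100, patience=5):
    # "Poor man's early stopping": track the best validation loss and stop
    # after `patience` epochs without improvement.
    best_loss = float("inf")
    early_stopping_counter = 0
    for _ in range(epochs):
        engine.train(train_loader)
        valid_loss = engine.evaluate(valid_loader)
        if valid_loss < best_loss:
            best_loss = valid_loss
            early_stopping_counter = 0  # reset on improvement
        else:
            early_stopping_counter += 1
        if early_stopping_counter > patience:
            break
    return best_loss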

Do you have any blogs? I like reading more than watching.

siddharthsinghbaghel

Great. Waiting eagerly.
Will you use (sklearn) pipelines?

kuberchaurasiya