All Hyperparameters of a Neural Network Explained

Neural networks have a lot of knobs and buttons you have to set correctly to get the best possible performance out of them. Some hyperparameters are self-explanatory and easy to understand and choose (like the number of neurons in the input layer), but many others are more complex in terms of how they affect the outcome of the model (e.g. the number of layers, the batch size, or the weight initialization).
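To make the "knobs and buttons" idea concrete, here is a minimal sketch (plain Python, no specific framework assumed; all names are illustrative, not from the video) that groups the hyperparameters mentioned above into one configuration object:

```python
from dataclasses import dataclass

# Illustrative grouping of common neural-network hyperparameters.
# The defaults below are typical starting points, not recommendations
# from the lesson.
@dataclass
class Hyperparameters:
    # Architecture knobs
    num_layers: int = 3             # number of hidden layers
    neurons_per_layer: int = 64     # width of each hidden layer
    activation: str = "relu"        # nonlinearity between layers
    weight_init: str = "he_normal"  # weight initialization scheme
    # Training knobs
    learning_rate: float = 1e-3     # step size for gradient updates
    batch_size: int = 32            # samples per gradient step
    epochs: int = 20                # passes over the training data

# Hyperparameters are just settings chosen before training begins:
config = Hyperparameters(batch_size=64)
```

Collecting them in one place like this makes it easy to see everything you can tune, and to change one knob at a time when experimenting.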

In this lesson, we will take an overall look at all possible hyperparameters and understand on a high level what they mean and how they affect the performance of the network. In the coming lessons, we will dive deeper into the details of these hyperparameters.

❓To get the most out of the course, don't forget to answer the end of module questions:

👉 You can find the answers here:

RESOURCES:

COURSES:

Comments

Thank you for that simple and clear explanation.👌

Kadirelm.s

No better explanation could be made than this one. I can't thank you enough for your video.

AbdulSaboorKhan-os

Amazing visualization of hyperparameters. Thanks a lot. 🙏🏻

behradio

How do hyperparameters use the mutation process in nature-inspired algorithms within a hybrid network architecture?

koushalyas

As I understand it, you should continue to train as long as the loss on the training data and the loss on the test/unseen data continue to decline. There will be a point where the two (training and test/unseen datasets) will diverge. Ideally, you should save the model just before that point. At that point, the model loss on the training data will continue to decline but the model will fail to generalize on the test/unseen data.
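The stopping rule described above (train while validation loss declines, keep the model from just before the two losses diverge) can be sketched as a simple early-stopping loop. This is a minimal sketch of the idea, not code from the video; `train_one_epoch` and `validation_loss` are hypothetical stand-ins for your own training and evaluation steps:

```python
def train_with_early_stopping(train_one_epoch, validation_loss,
                              max_epochs=100, patience=3):
    """Train until the loss on unseen data stops improving.

    Returns the epoch with the best validation loss (the point where
    you would save the model) and that loss.
    """
    best_loss = float("inf")
    best_epoch = -1
    epochs_without_improvement = 0
    for epoch in range(max_epochs):
        train_one_epoch()             # one pass over the training data
        val_loss = validation_loss()  # loss on the test/unseen data
        if val_loss < best_loss:
            best_loss = val_loss      # still generalizing: save here
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                 # losses have diverged: stop
    return best_epoch, best_loss
```

The `patience` parameter allows the validation loss to worsen for a few epochs before stopping, since it often fluctuates rather than declining smoothly.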

I'm looking forward to learning about regularization in the next video!!!

lakeguy