134 - What are Optimizers in deep learning? (Keras & TensorFlow)


Comments

Thank you for your video! Love the analogies with the blindfolded hiker and the ball; it really makes sense to me now!

andreeamada

Very good video. I learned how optimisers work in just 8 minutes.

zeshn

Thank you for explaining the concepts so clearly.

rshaikh

You are the BEST teacher. Thank you!!! All the best to you, sir Sreeni.

gvomrtt

Hello there, I really like the way you explained the concept.

blaisofotso

Excellent explanation, thank you so much. One question, however: you are saying that when I use the Adam optimizer I don't have to explicitly define the learning rate, right? But what happens when I do - optimizer = . Now what does that mean? My understanding is that the Adam optimizer starts with a learning rate of 5e-5 and takes it from there. Is that so? TIA.

himalayakilaru
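To clarify the default: Keras documents Adam's default learning_rate as 0.001, not 5e-5, so omitting the argument uses 0.001, while Adam(learning_rate=5e-5) overrides it. A minimal pure-Python sketch of a single Adam update, using the documented Keras defaults (beta_1=0.9, beta_2=0.999, epsilon=1e-7), shows where the learning rate enters:

```python
import math

def adam_step(theta, grad, m, v, t,
              lr=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    """One Adam update on a scalar parameter.

    Defaults mirror the values documented for Keras's Adam
    (learning_rate=0.001, epsilon=1e-7); passing lr=5e-5 overrides
    the default, just as Adam(learning_rate=5e-5) would.
    """
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# With the defaults, the very first step moves theta by roughly lr:
theta, m, v = adam_step(theta=1.0, grad=1.0, m=0.0, v=0.0, t=1)
```

So with no argument the first update is about 0.001 in magnitude; with learning_rate=5e-5 it would be about 5e-5, and Adam adapts the effective step size from there.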

Hi Sreeni, thanks for the video. Regarding the default values, in the TensorFlow description of Adam, they wrote "The default value of 1e-7 for epsilon might not be a good default in general. For example, when training an Inception network on ImageNet a current good choice is 1.0 or 0.1". Does it make sense to test several values here?
Also, I wondered whether it makes sense at all to pass a learning rate schedule to Adam?

manuelpopp
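On the schedule question: yes, Keras's Adam accepts a LearningRateSchedule object (e.g. tf.keras.optimizers.schedules.ExponentialDecay) as its learning_rate argument. A plain-Python sketch of the formula documented for ExponentialDecay (the default values here are illustrative) shows what such a schedule computes at each step:

```python
def exponential_decay(step, initial_lr=1e-3, decay_steps=1000, decay_rate=0.96):
    # Mirrors the formula documented for
    # tf.keras.optimizers.schedules.ExponentialDecay (staircase=False):
    #   lr(step) = initial_lr * decay_rate ** (step / decay_steps)
    return initial_lr * decay_rate ** (step / decay_steps)

# The optimizer evaluates the schedule at every update step,
# so the learning rate decays smoothly during training.
lr_at_start = exponential_decay(0)      # initial_lr
lr_later = exponential_decay(1000)      # initial_lr * decay_rate
```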

Is it possible, please, to attach a link to the research paper that talks about the Adam optimizer?

MohammedHAzeez

Or, when we talk about optimisation, are we talking about finding the best parameters? E.g. similar to how it's done with hyperparameter tuning for RF, DT, etc.?

mychanneltest

Hello sir, it is very informative for beginners. If possible, please also make a tutorial on stacked denoising autoencoders for intrusion detection.

nirmalanarisetty

Hi sir, could you upload slides for all the videos you posted?

minhaoling

2:25 Doesn't TF transform the equations used for the input into their respective derivatives? That is mathematically different from probing 2 points.

wiczus
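For what it's worth, TensorFlow does compute gradients by automatic differentiation (tf.GradientTape applies exact derivative rules via the chain rule), which is indeed different from numerically probing two nearby points. A small pure-Python comparison of the two ideas (the function f here is just an illustration):

```python
def f(x):
    return x ** 2

def analytic_grad(x):
    # What autodiff effectively computes: the exact rule d/dx x^2 = 2x.
    return 2 * x

def finite_difference(f, x, h=1e-5):
    # "Probing two points": a numerical approximation of the derivative,
    # which is not how TensorFlow computes gradients.
    return (f(x + h) - f(x - h)) / (2 * h)

exact = analytic_grad(3.0)           # 6.0, exact
approx = finite_difference(f, 3.0)   # close to 6.0, but an approximation
```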

Hi Sreeni, I am a beginner with Python, just learning the ropes. I thought every ML model tries to reduce the error anyway (e.g. linear regression, by fitting the line and reducing the residuals...). So what do we need optimizers for, then? I don't get it. Can anyone explain?

mychanneltest
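One way to see the connection: "fitting the line and reducing the residuals" is itself an optimization problem, and an optimizer is the procedure that solves it. A minimal sketch fitting a one-parameter line y = w*x by gradient descent (the data and learning rate are illustrative, not from the video):

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # generated with true slope w = 2

w = 0.0                      # initial guess for the slope
lr = 0.01                    # learning rate
for _ in range(500):
    # Gradient of the summed squared residuals sum((w*x - y)^2) w.r.t. w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad           # the optimizer's job: step downhill

# w converges toward the residual-minimizing slope, 2.0
```

Linear regression happens to have a closed-form solution, but deep networks do not, so an iterative optimizer (SGD, Adam, ...) is what actually drives the error down.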

Sir, please make a tutorial on image processing and segmentation with deep learning.

samarafroz

Sorry, I don't understand the role of the optimizer. We know the whole objective function is differentiable. I thought we just move in the opposite direction of this derivative. Why did you say that the optimizer keeps testing directions? Thanks!

thebiggerpicture__
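On the "testing directions" phrasing: for a differentiable loss, plain gradient descent really does just step opposite the gradient at each iteration; no directions are searched. The "testing" language is the blindfolded-hiker analogy for feeling the local slope. A minimal sketch minimizing (x - 3)^2 (the function and learning rate are illustrative):

```python
def minimize(x, lr=0.1, steps=100):
    """Plain gradient descent on the loss (x - 3)^2."""
    for _ in range(steps):
        grad = 2 * (x - 3)   # exact derivative of (x - 3)^2
        x -= lr * grad       # step opposite the gradient; no direction search
    return x

x_min = minimize(0.0)        # converges toward the minimum at x = 3
```

Optimizers like Adam refine *how big* each step is per parameter, not *whether* to follow the negative gradient.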