Vanishing and exploding gradients | Deep Learning Tutorial 35 (Tensorflow, Keras & Python)

The vanishing gradient is a common problem encountered while training a deep neural network with many layers. In the case of an RNN the problem is especially prominent, because unrolling the network through time makes it behave like a deep neural network with many layers. In this video we discuss what vanishing and exploding gradients are in an artificial neural network (ANN) and in a recurrent neural network (RNN).
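As a rough illustration of this (not from the video; the layer sizes and random data below are arbitrary), the following TensorFlow/Keras sketch stacks many sigmoid layers and prints the gradient norm of each layer's kernel, so you can watch the gradients shrink toward the earliest layers:

```python
import tensorflow as tf

# Deep stack of sigmoid layers: sigmoid'(x) <= 0.25, so backpropagation
# multiplies many small factors on the way back to the first layer.
layers = [tf.keras.layers.Dense(16, activation="sigmoid", input_shape=(8,))]
layers += [tf.keras.layers.Dense(16, activation="sigmoid") for _ in range(9)]
layers += [tf.keras.layers.Dense(1)]
model = tf.keras.Sequential(layers)

x = tf.random.normal((32, 8))   # dummy batch: 32 samples, 8 features
y = tf.random.normal((32, 1))   # dummy targets

with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(x) - y))
grads = tape.gradient(loss, model.trainable_variables)

# Kernel gradient norms are typically much smaller for layers near the input.
for var, grad in zip(model.trainable_variables, grads):
    if "kernel" in var.name:
        print(var.name, float(tf.norm(grad)))
```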

#vanishinggradient #gradient #gradientdeeplearning #deepneuralnetwork #deeplearningtutorial #vanishing #vanishingdeeplearning


❗❗ DISCLAIMER: All opinions expressed in this video are my own and not those of my employer.
Comments

AMAZING EXPLANATION SIR....

Please make a video on how you understand and explain such complex topics so easily; that will help us educate ourselves🙌🏻🙌🏻🙌🏻

hardikvegad

Hi Sir, I appreciate your videos. They're really useful. Can you please make videos that show examples of RNN and LSTM, as well as videos on Deep Reinforcement Learning?

eitanamos

Amazing explanations. Thank you very much!

meilinlyu

EXPLANATION, VIDEO AND AUDIO QUALITY ARE GREAT. PLS GUIDE US ON WHAT KIND OF SOFTWARE YOU HAVE USED FOR RECORDING THE VIDEO

n.ilayarajahicetstaffit

Thank you very much, sir. Crystal clear explanation!

amirhossein.roodaki

The series of explanations, video by video, is awesome :)

suryanshpatel

Please release all videos as soon as possible. 🙏🏻

mandarchincholkar

Thank you for the great video. Clear and easy to understand.

anonymousAI-prwq

Sir, can you please make a video on generative adversarial networks and a simple example project that implements a GAN?

saifsd

Thanks a lot. I think there is a typo in the slides, as a3 is missing; you have a2 followed by a4.

walidmaly

4:36 is literally me, lol


amazing explanation tho, thanks so much!

Acampandoconfrikis

Hi Dhaval, Great content! Really learning a lot from your videos. Do you upload your slides as well? Would be really helpful if I could go through slides when required. Thank you.

tahahusain

Great explanation sir 🔥🔥🔥. Wonder why you haven't reached M subscribers...!!!!

sahith

3:33 "Bigger small number" lol

ChessLynx

Sir, how can GRU and LSTM solve the vanishing gradient problem? Is there any video on that? Kindly let me know..

piyalikarmakar

As the number of hidden layers grows, the gradient becomes very small and the weights will hardly change.

porrasbrand
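To put rough numbers on the comment above: the sigmoid derivative is at most 0.25, so a chain of n sigmoid layers can scale the gradient reaching the first layer by a factor as small as 0.25^n. A tiny illustrative Python snippet (an upper-bound estimate, not measured from any real network):

```python
# Upper bound on the gradient factor after `depth` sigmoid layers,
# using sigmoid'(x) <= 0.25 for all x.
for depth in (2, 5, 10, 20):
    print(depth, 0.25 ** depth)
# 2 -> 0.0625, 5 -> ~9.8e-04, 10 -> ~9.5e-07, 20 -> ~9.1e-13
```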

While training a deep neural network with 2 units in the final layer and a sigmoid activation function for binary classification, both weights of the final layer become 0, leading to the same score for all inputs since the sigmoid then uses only the bias. What are some reasons for this?

haneulkim

Hi everyone, I have one doubt. As said in the video, many times we take the derivative of the loss with respect to the weights, but the loss is a constant value and the derivative of a constant is zero, so how are the weights updated? I know it's a silly question, but can anyone please answer it? It would be very helpful.

yourentertainer
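On the question above: the loss evaluates to a single number, but it is still a function of the weights, so its derivative with respect to a weight is generally non-zero. A minimal TensorFlow sketch with made-up numbers:

```python
import tensorflow as tf

w = tf.Variable(2.0)            # a single weight
x, y_true = 3.0, 7.0            # one made-up training example

with tf.GradientTape() as tape:
    y_pred = w * x              # the prediction depends on w
    loss = (y_pred - y_true) ** 2

# dL/dw = 2 * (w*x - y_true) * x = 2 * (6 - 7) * 3 = -6.0, not zero.
print(tape.gradient(loss, w))
```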

If the weights of this single layer are the same in an RNN, then why backpropagate all the way back through time? Why not use only the last word and get the weights?

manojsamal

Sir, how many tutorials are still remaining to complete this deep learning playlist?
Or how much of this deep learning playlist have we covered so far, in terms of percentage?

rohankushwah