Deep Learning - Activation Functions - ELU, PReLU, Softmax, Swish and Softplus

This video continues the activation functions topic from my complete deep learning playlist. In this video we will cover the ELU, PReLU, Softmax, Swish and Softplus activation functions.
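For readers following along without the notebook, here is a minimal NumPy sketch of the five activations covered in the video; the function names and the default alpha/beta values are illustrative assumptions, not the exact code from the notebook.

import numpy as np

def elu(x, alpha=1.0):
    # ELU: x for x > 0, alpha * (exp(x) - 1) otherwise
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

def prelu(x, alpha=0.25):
    # PReLU: x for x > 0, alpha * x otherwise (alpha is a learned parameter in practice)
    return np.where(x > 0, x, alpha * x)

def softmax(x):
    # Softmax over the last axis, shifted by the max for numerical stability
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / np.sum(e, axis=-1, keepdims=True)

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x)
    return x / (1.0 + np.exp(-beta * x))

def softplus(x):
    # Softplus: log(1 + exp(x)), a smooth approximation of ReLU
    return np.log1p(np.exp(x))

x = np.linspace(-3.0, 3.0, 7)
print(elu(x), prelu(x), swish(x), softplus(x), softmax(x), sep="\n")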
All playlists on my channel:

Please donate through GPay UPI ID if you want to support the channel.

Please join my channel as a member to get additional benefits like Data Science materials, members-only live streams and much more.

Please do subscribe to my other channel too.

Connect with me here:

#DataScientist
Comments

Dear Krish: we all love you, your energy, your enthusiasm.

One point about the derivative of the ReLU activation function at zero.

To express it properly, the derivative of ReLU does not exist at zero because the derivative is a step function that is discontinuous at zero: the limit approaching from the left is not equal to the limit approaching from the right.

The ReLU function is continuous, unbounded and not zero-centered. At x = 0, the left-hand derivative of ReLU is zero while the right-hand derivative is 1. Since the left-hand and right-hand derivatives are not equal at x = 0, the ReLU function is not differentiable at x = 0.

The derivative of Leaky ReLU is still discontinuous at zero; hence it is still not differentiable at zero.

Generally, in geophysics, we use Leaky ReLU as follows:

f(x) = 0.1*x if x < 0
f(x) = x if x >= 0
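A quick numerical check of this point, as a minimal NumPy sketch (the 0.1 slope follows the definition above; the finite-difference step h is an illustrative choice):

import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.1):
    return np.where(x >= 0, x, alpha * x)

h = 1e-6  # finite-difference step
for name, f in [("ReLU", relu), ("Leaky ReLU", leaky_relu)]:
    left = (f(0.0) - f(-h)) / h    # one-sided difference from the left
    right = (f(h) - f(0.0)) / h    # one-sided difference from the right
    print(name, "left ~", round(float(left), 3), "right ~", round(float(right), 3))

# Expected output: ReLU gives 0 and 1, Leaky ReLU gives 0.1 and 1, so the
# one-sided derivatives disagree and neither function is differentiable at zero.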

Also, I am your bhakt and I want to start your three iNeuron courses as soon as I reach India:
Master Machine Learning
Masters Deep Learning
Masters NLP

Congratulations on your new position as CTO at iNeuron.

Cheers, Roy

sukumarroychowdhury

Can you share the GitHub link for the code?

arpanghosh

Thank you, sir, for making this video; it is really very helpful. Sir, can you please provide us with this notebook?

DeepakSaini-sgpq

Thank you, sir, you really made the concepts related to the different activation functions so clear.

ratulghosh

Such an amazing video, sir. Please put a link to this notebook in the description; that will help us revise this material.

hariharans

Please make the theoretical videos on a whiteboard, because most people are familiar with that. Thank you, Krish, big fan of you 🖤🖤

pritamH

Really very helpful! Can I get this notebook?

rohanyewale

Hi Krish, can you please provide the link for this notebook? Great content and a nice explanation. :)

chitramethwani

Thank you so much, sir, for taking the time and effort to put out such great content!

Sudeepdas

f(x) = x means connected; f(x) = 0 means disconnected. ReLU is then a switch. A ReLU neural net is a switched system of dot products. Fast transforms like the FFT and the fast Walsh-Hadamard transform are fixed systems of dot products that you are free to mix in.
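A small NumPy sketch of this switch view, under toy assumptions (random weights, size-8 vectors): the ReLU layer equals the dot products z = W @ x gated by a 0/1 mask, and a fixed transform such as numpy's FFT is a fixed system of dot products that could be mixed in.

import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))   # learned dot products (random here for illustration)
x = rng.normal(size=8)

z = W @ x
mask = (z > 0).astype(z.dtype)   # the switch: 1 connects f(x) = x, 0 disconnects
relu_out = mask * z              # identical to np.maximum(z, 0)
assert np.allclose(relu_out, np.maximum(z, 0))

# The FFT is a fixed system of dot products, so it can stand in for (or precede)
# the learned weights without adding parameters.
z_fft = np.fft.fft(x).real
print(np.maximum(z_fft, 0))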

nguyenngocly

Hi Krish

Can you please provide the activation functions notebook for our reference?

kavuluridattasriharsha

You are the very best teacher, sir. Ultimate and even better...

satwindersingh

Hi Krish, thanks for the lovely explanation. I have one question: why does zero-centered data converge faster? Can anyone explain this?

arijitmukherjee

Hi Krish, well explained. Could you please help me with the Jupyter notebook for this activation functions script?

randhirpratapsingh

Very informative video, sir.
Please can you share the link to the notebook?

abhinaykumar

If sigmoid is used only in the output layer, then why did you use it in the hidden layers in the earlier videos, sir?

heecmat

How can I get the .ipynb file that you described here? Thank you.

techsavy

Hi, where can we find the Jupyter notebooks or the notes for the videos?

shubhibansal

If ReLU has a zero or one output, then why don't we use the step function instead?

smarttaurian

Naik sir, can you make a fully explained video on the YOLO algorithm with a working program?

DragonOO