Loss or Cost Function | Deep Learning Tutorial 11 (Tensorflow Tutorial, Keras & Python)

A loss (or cost) function is an important concept to understand if you want to grasp how a neural network trains itself. In this video we will go over several loss functions: mean absolute error (MAE), mean squared error (MSE), and log loss (also known as binary cross entropy). After covering the theory, we will implement these loss functions in Python. Working through this implementation is worthwhile, as it may come up in interviews if you are targeting a data scientist or machine learning engineer role.
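The three loss functions described above can be sketched in NumPy as follows. This is a minimal illustration, not the notebook's exact code; the sample values are assumed for demonstration:

```python
import numpy as np

# example predictions and targets (assumed for illustration)
y_true = np.array([0.30, 0.70, 1.00, 0.00, 0.50])
y_pred = np.array([1.00, 1.00, 0.00, 0.00, 1.00])

def mae(y_true, y_pred):
    # mean absolute error: average of |y_true - y_pred|
    return np.mean(np.abs(y_true - y_pred))

def mse(y_true, y_pred):
    # mean squared error: average of (y_true - y_pred)^2
    return np.mean((y_true - y_pred) ** 2)

def log_loss(y_true, y_pred, epsilon=1e-15):
    # clip predictions away from exact 0 and 1 so log() never receives 0
    y_pred = np.clip(y_pred, epsilon, 1 - epsilon)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

print(mae(y_true, y_pred))
print(mse(y_true, y_pred))
print(log_loss(y_true, y_pred))
```

The epsilon clipping in `log_loss` matters because predictions of exactly 0 or 1 would otherwise produce log(0), which is undefined.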

Exercise: Go to the end of the above notebook to see the exercise

🔖 Hashtags 🔖
#lossfunction #costfunction #costfunctionneuralnetwork #lossfunctionneuralnetwork #costfunctiondeeplearning #lossfunctiondeeplearning

Why not MSE for logistic regression:

Prerequisites for this series:   

Comments

Loving this series. You're so good at looking at everything from the learner's perspective and clarifying everything! Much better than all the other tutorials I've tried.

ashwin

When changing y_predicted to avoid errors with log(0), it is possible to do [abs(y - epsilon) for y in y_predicted], which works for both the y=1 and y=0 cases.
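A quick sketch of that trick (epsilon value assumed), showing it handles both endpoints:

```python
epsilon = 1e-15
y_predicted = [1, 0]

# abs(y - epsilon) maps y=1 -> 1 - epsilon and y=0 -> epsilon,
# so log() is safe at both endpoints
adjusted = [abs(y - epsilon) for y in y_predicted]
print(adjusted)
```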

viniciushedler

Sir, in binary cross entropy the loss function should be C = -(1/n) * Σ [yᵢ·log(pᵢ) + (1 - yᵢ)·log(1 - pᵢ)],
where pᵢ is the predicted probability of the positive class for sample i.
By the way, nice intuitive lectures; I love your way of teaching 🙏
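The formula in that comment can be checked numerically with a small sketch (the labels and probabilities below are made up for illustration):

```python
import math

y = [1, 0, 1]        # true binary labels
p = [0.9, 0.2, 0.8]  # predicted probability of the positive class
n = len(y)

# C = -(1/n) * sum(y*log(p) + (1-y)*log(1-p))
loss = -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
            for yi, pi in zip(y, p)) / n
print(loss)
```

Confident, well-calibrated predictions keep the loss small; a prediction of p=0.2 for a true 0 contributes only -log(0.8).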

priyabratapanda

Feedback: you could add chapters to the tutorials; it would help with navigation.

BahauddinAziz

You should not hesitate to mention Andrew Ng's name. We all learn from somebody. Andrew Ng has his own expertise and you have your own.
It is always perfectly okay to credit the sources we learn from.

meenakshichippa

Instead of doing it manually, we can avoid log(0) using NumPy:
epsilon = 1e-15
y_predicted_new = np.clip(y_predicted, epsilon, 1 - epsilon)
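As a quick check of that approach (example values assumed), clipping keeps every prediction strictly inside (0, 1), so the subsequent log never produces -inf:

```python
import numpy as np

epsilon = 1e-15
y_predicted = np.array([1.0, 0.0, 0.7])

# exact 0 becomes epsilon, exact 1 becomes 1 - epsilon,
# and in-range values like 0.7 are left untouched
y_predicted_new = np.clip(y_predicted, epsilon, 1 - epsilon)
print(np.log(y_predicted_new))  # all finite, no -inf
```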

sakalagamingyt

That log loss vs MSE article was a BOOM!

mohitupadhayay

It's a very easy exercise; I just replaced the absolute value with the square and it's done. Thanks for this playlist!

GamerBoy-iijc

I was frightened by that coronavirus bit, mate; it freaked me out 😂😂. Thanks for this awesome video.

harshkartiksingh

Great man does a great job. Thank you so much.

izharkhankhattak

I'm just hooked on this series. It's super interesting.

maheshsingh

We need to upgrade to read the full article that you are referring to in the video...

useryaya-rh

I was wondering why we get 0.5 as the MAE result with the plain-Python code but 0.26 with the NumPy code.

bilkisuismail

min(i, 1 - epsilon) caps every value at 1 - epsilon, even values that are above 1.
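A small sketch (example values assumed) confirms that min caps anything at or above the upper bound:

```python
epsilon = 1e-15
values = [0.3, 1.0, 1.2]

# 0.3 passes through unchanged; 1.0 and 1.2 are both capped at 1 - epsilon
capped = [min(v, 1 - epsilon) for v in values]
print(capped)
```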

karanveersingh

Thank you so much for this course, sir

abhishekfnu

For real, I typed out all of that code myself without using your notebook, sir, because of the virus warning 😅😅

pickase

At 17:32 the formula needs parentheses: opening before yᵢ and closing at the end of the expression.

manujarora

You are excellent boss. Allah bless you!

asamadawais

Thank you so much for the detailed explanation. Could you please also explain the SGD and Adam optimizers?

leelavathigarigipati

Very beneficial videos. Could you upload videos about online or offline job opportunities in machine learning and deep learning?

ronyjoseph