Lec 2.1: Cost Functions for Logistic Regression

--------------------
Video Summary:

In this video, we introduce the loss and cost functions for logistic regression in a deep-neural-network context.

The first equation presents the probability that the picture contains an iPhone given the input x, i.e., y-hat = P(y = 1 | x).
y-hat is your estimate of y, the actual label.
In the previous lecture, we presented our goal: to develop a neural network that fine-tunes the parameters w and b so that, for an input x, the value of y-hat is as close as possible to the actual value y.
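
As a rough illustration, here is a minimal NumPy sketch of how y-hat could be computed from w, b, and x; the parameter and input values below are arbitrary placeholders, not taken from the lecture.

```python
import numpy as np

def sigmoid(z):
    # Squash a real-valued score into the (0, 1) range.
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # y-hat = sigmoid(w^T x + b), the model's estimate of P(y = 1 | x).
    return sigmoid(np.dot(w, x) + b)

# Hypothetical example: a 3-feature input with arbitrary parameters.
w = np.array([0.2, -0.5, 0.1])
b = 0.3
x = np.array([1.0, 2.0, 0.5])
print(predict(w, b, x))  # a probability between 0 and 1
```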

The difference between y and y-hat is measured by the loss function of logistic regression presented in the lecture: L(y-hat, y) = -( y * log(y-hat) + (1 - y) * log(1 - y-hat) ).

The loss function measures the error between y-hat and y for a single training example, that is, a single image. The training data is the data you fine-tune your algorithm's parameters on.
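
To make the formula concrete, here is a minimal sketch of that loss for a single example; the numbers are hypothetical, not from the lecture.

```python
import numpy as np

def logistic_loss(y_hat, y):
    # Cross-entropy loss for one example: -(y*log(y_hat) + (1-y)*log(1-y_hat)).
    return -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# Hypothetical predictions for a positive example (y = 1):
print(logistic_loss(0.9, 1))  # ~0.105: confident and correct -> small loss
print(logistic_loss(0.1, 1))  # ~2.303: confident but wrong -> large loss
```

The loss is small when y-hat agrees with y and grows quickly as they diverge, which is exactly the behavior we want to minimize.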

The cost function calculates the average loss over the entire training set: J(w, b) = (1/m) * sum over i of L(y-hat^(i), y^(i)), where m denotes the number of training examples.
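
A minimal sketch of the cost computed over a toy training set; the column-per-example layout of X is an assumption made for illustration, not something fixed by the lecture.

```python
import numpy as np

def cost(w, b, X, Y):
    # X: one training example per column (shape n x m); Y: labels (shape (m,)).
    m = X.shape[1]
    Y_hat = 1.0 / (1.0 + np.exp(-(np.dot(w, X) + b)))  # sigmoid(w^T x + b) per column
    losses = -(Y * np.log(Y_hat) + (1 - Y) * np.log(1 - Y_hat))
    return np.sum(losses) / m                          # average loss = cost J(w, b)

# Hypothetical toy data: 2 features, m = 3 examples.
X = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 1.5]])
Y = np.array([1, 0, 1])
print(cost(np.array([0.1, -0.2]), 0.0, X, Y))
```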

On another note:
The sigmoid function is used for two-class logistic regression, whereas the softmax function is used for multiclass logistic regression. The softmax function is essentially an extension of the sigmoid function to the multiclass case, as the sketch below illustrates.
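
The two-class relationship is easy to check numerically. A minimal sketch (not from the lecture): with scores [z, 0], the softmax probability of the first class equals sigmoid(z).

```python
import numpy as np

def sigmoid(z):
    # Two-class case: one score -> probability of the positive class.
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Multiclass case: a vector of scores -> a probability distribution.
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / np.sum(e)

z = 1.5
print(sigmoid(z))                      # ~0.8176
print(softmax(np.array([z, 0.0]))[0])  # same value: softmax reduces to sigmoid
```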

--------------------
Please don't forget to like and subscribe to encourage me to post similar content.
--------------------
Feel free to connect with me on LinkedIn