L8.3 Logistic Regression Loss Derivative and Training


Now that we understand the forward pass in logistic regression and are familiar with the loss function, let us look at the derivative (gradient) of the loss with respect to the weights. We can then apply gradient descent to update the weights, minimizing the loss and thereby improving the prediction accuracy.
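The video's own code is not shown on this page, so here is a minimal NumPy sketch of the idea: a forward pass through the sigmoid, the gradient of the mean negative log-likelihood, and a plain gradient-descent update. Names such as train_logistic_regression, lr, and epochs are illustrative, not taken from the course materials.

```python
import numpy as np

def sigmoid(z):
    # Logistic (sigmoid) activation: maps logits to probabilities in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=100):
    """Fit weights and bias by full-batch gradient descent on the
    mean negative log-likelihood (binary cross-entropy) loss."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        # Forward pass: predicted probabilities.
        y_hat = sigmoid(X @ w + b)
        # Gradient of the mean NLL w.r.t. weights and bias: (y_hat - y) terms.
        grad_w = X.T @ (y_hat - y) / n
        grad_b = np.mean(y_hat - y)
        # Gradient descent step.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: two separable Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),
               rng.normal(2.0, 1.0, size=(50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = train_logistic_regression(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(float)
print("training accuracy:", (preds == y).mean())
```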

-------

This video is part of my Introduction to Deep Learning course.

-------

Comments

Is the negative log-likelihood function at 5:00 wrong? Shouldn't there be a negative sign on the second term, i.e., -(1 - yi)?

jiangshaowen
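For context on this question (the slide itself is not shown here), the standard negative log-likelihood for binary labels is usually written with a single leading minus sign:

\[
\mathcal{L}(\mathbf{w}, b) = -\frac{1}{n} \sum_{i=1}^{n} \Big[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \Big].
\]

With the minus factored out in front of the sum, both terms inside the brackets stay positive; the second term only becomes -(1 - yi) log(1 - yhat_i) if that leading minus is distributed. Both spellings are equivalent.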

At 11:44, when you take the derivative, should it be 1/n * (y - yhat) since you are taking derivative with respect to y? Did you miss a negative sign?

algorithmo
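As a sanity check on the sign question (a standard derivation; the video's notation at 11:44 may differ): with \(z_i = \mathbf{w}^\top \mathbf{x}_i + b\) and \(\hat{y}_i = \sigma(z_i)\), the chain rule gives

\[
\frac{\partial \mathcal{L}}{\partial \hat{y}_i} = -\frac{1}{n} \left( \frac{y_i}{\hat{y}_i} - \frac{1 - y_i}{1 - \hat{y}_i} \right),
\qquad
\frac{\partial \hat{y}_i}{\partial z_i} = \hat{y}_i (1 - \hat{y}_i),
\qquad
\frac{\partial z_i}{\partial \mathbf{w}} = \mathbf{x}_i .
\]

Multiplying these factors, the sigmoid derivative \(\hat{y}_i(1 - \hat{y}_i)\) cancels both denominators, leaving

\[
\frac{\partial \mathcal{L}}{\partial \mathbf{w}} = \frac{1}{n} \sum_{i=1}^{n} (\hat{y}_i - y_i)\, \mathbf{x}_i ,
\]

so the \((\hat{y} - y)\) form already carries the correct sign; \(\frac{1}{n}(y - \hat{y})\) would be the negative gradient, i.e., the descent direction.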