Custom Activation and Loss Functions in Keras and TensorFlow with Automatic Differentiation

TensorFlow includes automatic differentiation, which allows a numeric derivative to be calculated for differentiable TensorFlow functions. This makes it easy to create your own loss and activation functions for Keras and TensorFlow in Python.
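For example, a minimal sketch of how GradientTape evaluates a derivative (standard TensorFlow 2.x API; the function f(x) = x^2 is illustrative, not taken from the video):

import tensorflow as tf

x = tf.Variable(5.0)

# Record the operations applied to x so TensorFlow can differentiate them.
with tf.GradientTape() as tape:
    y = x ** 2  # f(x) = x^2

# dy/dx = 2x, so this prints 10.0 at x = 5.
print(tape.gradient(y, x).numpy())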

The code for this video can be found here:

Comments

Great video on auto grads, amazing as always. Loved it, Dr. Jeff

mockingbird

Thank you so much for this amazing video.

subhajitpaul

Hi Jeff. Thank you so much. I spent a couple of hours figuring out how on earth Keras manages any change in a custom loss so easily. I was worried whether it even checks if the function is differentiable. With this video, things are pretty clear now.

ChandraShekhar-rnty

Learning from the legend. It was an amazing experience. Thank you

kbd

I was actually looking for this GradientTape() everywhere, thank you. Finally my doubt is cleared :-)

prajith

Small correction @1:36: You don't "take the partial derivative of each weight"; you take the partial derivative of the loss function with respect to each weight. Also @7:24, the derivative of x^2 is 2x, not x. And @7:46, that IS the definition of the ANALYTIC derivative. It is also used in the discrete case, the difference being that the jumps are finite, not infinitesimal.

NisseOhlsen
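For reference, the definition being pointed to is the limit of the difference quotient (standard calculus, added here for context):

    f'(x) = lim_{h -> 0} (f(x + h) - f(x)) / h

A numerical derivative replaces the limit with a small but finite h.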

Awesome work, liked and subscribed, excited to see more.

tonihullzer

Thank you for this video; this question was very important to me, and now I know how to work it out :)

slime

Thank you for a good lecture.
I have a question.

y = tf.divide(1.0, tf.add(1, tf.exp(tf.negative(x))))
vs
y = 1.0/(1+tf.exp(-x))

Is there any difference?

heecheolcho
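As a quick check, the two forms appear to be equivalent: Python operators on tensors dispatch to the same TensorFlow ops (tf.divide, tf.add, tf.negative), so both build the same sigmoid computation. A sketch using standard TF 2.x API (the comparison itself is illustrative, not from the video):

import tensorflow as tf

x = tf.constant([-2.0, 0.0, 2.0])

y1 = tf.divide(1.0, tf.add(1.0, tf.exp(tf.negative(x))))  # explicit op calls
y2 = 1.0 / (1.0 + tf.exp(-x))                             # overloaded operators

# The largest elementwise difference prints 0.0.
print(tf.reduce_max(tf.abs(y1 - y2)).numpy())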

Does GradientTape break when math operations are applied to custom indexes of the input tensor, or when stacking tensors and then using them in the loss function? Please suggest a workaround; I've been trying to implement it, but it returns all gradients as NaN.

tanyajain
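Slicing and tf.stack are themselves differentiable, so they should not break the tape on their own; NaN gradients more often come from ops like sqrt(0), log(0), or division by zero inside the loss. A minimal sketch (standard TF 2.x API, illustrative example) where indexing plus stacking yields finite gradients:

import tensorflow as tf

x = tf.Variable([1.0, 2.0, 3.0, 4.0])

with tf.GradientTape() as tape:
    a = x[0] * x[1]             # math on individual indexes
    b = x[2] + x[3]
    stacked = tf.stack([a, b])  # stack the pieces back together
    loss = tf.reduce_sum(stacked ** 2)

print(tape.gradient(loss, x).numpy())  # finite values, no NaNs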

Thanks for the amazing explanation, I finally understand GradientTape (I think at least haha).

shunnie

Hi Jeff,

Thank you for all these great videos. I have a question about TensorFlow. If I create a model with no hidden layers, does that make my model not a neural network but linear discriminant analysis? Like this:

model = keras.Sequential([
    keras.layers.Dense(12, activation="relu"),
    keras.layers.Dense(3, activation="softmax")
])

StormiestOdin
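For context: the snippet above does have a hidden layer (the Dense(12, "relu") layer). A Sequential model with no hidden layer and a softmax output reduces to multinomial logistic regression, a linear model (related to, but not the same as, linear discriminant analysis). A minimal sketch (standard Keras API; the input and output sizes are illustrative):

import tensorflow as tf
from tensorflow import keras

# One Dense softmax layer: y = softmax(Wx + b),
# i.e. multinomial logistic regression.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")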

Hi Jeff, is there a way to access y_pred information?
I want to build my loss function, but not in the conventional way that passes y_pred and y_true to a tf or backend function.

I need to access the y_pred values, apply a function to estimate their standard deviation, and return that std value as the output of my loss function.

Do you know how to do this?

tonsandes
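One hedged sketch of such a loss: Keras calls any loss function as fn(y_true, y_pred), so y_pred is available directly, and tf.math.reduce_std (standard TF 2.x) can compute its standard deviation; whether minimizing this is meaningful depends on the application:

import tensorflow as tf

def std_loss(y_true, y_pred):
    # Ignore y_true; return the standard deviation of the predictions.
    return tf.math.reduce_std(y_pred)

# Usage: model.compile(optimizer="adam", loss=std_loss)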

Also, in the beginning you said the derivative of x^2 is x. It is 2x.

brubrudsi

Strange, why did I get "Tensor("AddN:0", shape=(), dtype=float32)" as output instead?

zonexo
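A string like Tensor("AddN:0", shape=(), dtype=float32) is a symbolic tensor, which is what printing produces in graph mode (TF 1.x, or inside a @tf.function) rather than in eager mode; presumably that is what happened here. In TF 2.x eager execution, the same print shows the value, and .numpy() extracts it:

import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x + x  # dy/dx = 2x + 1

grad = tape.gradient(y, x)
print(grad)          # eager: tf.Tensor(7.0, shape=(), dtype=float32)
print(grad.numpy())  # 7.0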