The Fundamentals of Autograd

Autograd is the automatic gradient computation framework used with PyTorch tensors to compute the backward pass during training. This video covers the fundamentals of Autograd, including: the advantages of runtime computation tracking, the role of Autograd in model training, how to determine when Autograd is and is not active, profiling with Autograd, and Autograd's high-level API.
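The mechanics described above can be sketched in a few lines. This is a minimal illustration (the tensors and values here are hypothetical, not taken from the video):

```python
import torch

# requires_grad=True tells autograd to track computations on x at runtime
x = torch.tensor([2.0, 3.0], requires_grad=True)

# y = x0^2 + x1^2; autograd records this computation graph as it runs
y = (x ** 2).sum()

# backward pass: populate x.grad with dy/dx
y.backward()

print(x.grad)  # dy/dx = 2x -> tensor([4., 6.])
```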

Comments

This is one of the most eloquent and succinct videos on training in pytorch I've seen.

Sami_Wilf

Nice set of videos. I just wish the audio volume was higher.

HrishikeshMuruk

Thank you so much for your clear explanations! Makes the API much easier to understand when the mathematical intuition is already there!

cyberacula

11:33 Did you mean: the partial derivative of the loss with respect to the INPUTS (not the learning weights)?

James

After running zero_grad the grads are not zeroed; instead .grad is set to None. I.e., if I just print .grad[0] I get a NoneType object error.

Maybe change the last cell to:

if model.layer2.weight.grad is not None:
    print(model.layer2.weight.grad[0])

for i in range(0, 5):
    prediction = model(some_input)
    loss = (ideal_output - prediction).pow(2).sum()
    loss.backward()

optimizer.zero_grad()

if model.layer2.weight.grad is not None:
    print(model.layer2.weight.grad[0])

gebbione

very well explained series of videos. thank you. i however feel bad for your enter key.

FavourAkpasi

This is the official docs video, but with such low-quality audio?

cagdastopcu

Good video. Go softer on the keyboard, please.

PurtiRS

awesome that was very succinct and clear.

wayneqwele

So THAT'S how it works... cool! Thx!

atari