PyTorch Lecture 04: Back-propagation and Autograd


Comments

Thanks a lot!!! Your videos are not just about the framework. You explain every bit in detail.

debanjansengupta

Great explanation of Backprop! I really like the way you visualise the math. Keep it up! :)

khoiphan

Actually, torch.Tensor and torch.autograd.Variable are now the same class. This means that instead of calling w = Variable(torch.Tensor([1]), requires_grad=True) you can create the tensor with requires_grad=True directly. <3 great tutorials btw
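
A minimal sketch of the merged API (assuming PyTorch 0.4 or later; the literal 1.0 simply mirrors the Variable example above):

import torch

# Since PyTorch 0.4, Variable has been folded into Tensor, so a tensor
# created with requires_grad=True is tracked by autograd directly.
w = torch.tensor([1.0], requires_grad=True)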

MucciciBandz

Here math meets code. Kim, it is awesome that you take the time to explain the gradient wonders of deep learning. This video is evidence that PyTorch is awesome and that Kim knows how to teach backpropagation using PyTorch.

crvnse

Thank you for clarifying backpropagation using computational graphs.

VIVEKPANDEYIITB

Very good video. Wonderfully explained. Thanks a lot my friend.
Keep up the good work


Thanks a lot, Sung Kim! This is the most straightforward explanation I have ever seen.

nagong

An excellent explanation of backprop. I appreciate it. Thank you, professor :)

bayesianlee

Great explanation. Helped me a lot! Good visualisation also.

reinvanlennep

It's the best explanation I've ever seen! Thank you!

alexeilazarev

At 13:32, the results in the two tables show different numbers of decimal places; why does the numeric gradient computation show only 2 decimals?
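
If the question is about how the hand-computed (numeric) gradient compares with autograd's, here is an illustrative comparison; the function, data point, and step size are assumptions rather than the lecture's exact numbers, and the number of decimals shown is purely a matter of printing:

import torch

w = torch.tensor([1.0], requires_grad=True)
x, y = 2.0, 4.0

# autograd gradient of the squared-error loss
loss = (w * x - y) ** 2
loss.backward()
print(w.grad.item())                     # -8.0

# numerical estimate via a central difference
eps = 1e-4
f = lambda wv: (wv * x - y) ** 2
numeric = (f(1.0 + eps) - f(1.0 - eps)) / (2 * eps)
print(round(numeric, 2))                 # also about -8.0; fewer decimals only because of rounding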

taibn

May I know why, at 12:58, w.data should subtract the learning rate times w.grad.data instead of adding it? Thanks
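
The short answer is that the gradient points in the direction that increases the loss, so the update steps the opposite way. A small sketch (the data point and learning rate are illustrative, not the lecture's exact values):

import torch

w = torch.tensor([1.0], requires_grad=True)
x, y = 2.0, 4.0
lr = 0.01

loss = (w * x - y) ** 2
loss.backward()

# w.grad points uphill (toward larger loss), so we subtract to move downhill;
# adding lr * w.grad.data would make the loss grow instead.
w.data -= lr * w.grad.data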

pleung

Great series! I have a few inquiries: 1. How is it that l gets the method backward available to it? Is this the case for any value whose computation involved a Variable? 2. Is w.grad.data a 1-dimensional tensor requiring use of index [0] to get at the one value within it?
3. I realize this has been asked already, but it's not clear to me why we have to zero the gradients. What happens if we don't?
Thank you
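
A sketch that touches all three points (the numbers are illustrative): any tensor computed from a requires_grad tensor carries a grad_fn and therefore supports backward(); w.grad has the same shape as w, hence the [0]; and gradients accumulate across backward() calls unless they are zeroed.

import torch

w = torch.tensor([1.0], requires_grad=True)
x, y = 2.0, 4.0

l = (w * x - y) ** 2
print(l.grad_fn)           # l was built from w, so it has a grad_fn and supports l.backward()

l.backward()
print(w.grad.shape)        # torch.Size([1]), same shape as w, hence w.grad[0] (or .item())

# Without zeroing, a second backward() adds to the stored gradient:
l = (w * x - y) ** 2
l.backward()
print(w.grad[0].item())    # twice the single-pass value, because gradients accumulate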

bluelight

Thanks, great work. One question: what is the exact difference between .grad and .grad.data?
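
Roughly: in the pre-0.4 API, w.grad was a Variable and .data was how you reached the raw tensor inside it; in current releases both are plain tensors and .data just gives an untracked view of the same values, so in this lecture's code they print the same thing. A tiny illustration (numbers are arbitrary):

import torch

w = torch.tensor([1.0], requires_grad=True)
((w * 2.0 - 4.0) ** 2).backward()

print(w.grad)        # the gradient tensor itself
print(w.grad.data)   # the same values, viewed without autograd tracking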

DanielWeikert

Awesome! Superb explanation!! Keep it up.

bosoninfo

If you guys are getting objects printing like "tensor(3.1854)" and want to get just the value without the word tensor, add '.item()' to the end of whatever you are printing,
e.g. 'l.data[0].item()'
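
For reference, a tiny sketch of that suggestion (the 3.1854 is just the value from the comment above):

import torch

l = torch.tensor([3.1854])
print(l)           # prints the tensor(...) wrapper
print(l.item())    # prints just the underlying Python number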

akramsystems

Hi, thanks for the video. Just a question: why is it necessary to zero the gradients each time (w.grad.data.zero_())?
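
The reason is that backward() adds into .grad rather than overwriting it, so without the reset each step would use the sum of all previous gradients. A bare-bones version of the lecture-style loop, showing where the zeroing sits (the data and learning rate are stand-ins, not necessarily the lecture's exact values):

import torch

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]
w = torch.tensor([1.0], requires_grad=True)

for epoch in range(10):
    for x, y in zip(x_data, y_data):
        l = (w * x - y) ** 2
        l.backward()                    # adds this sample's gradient into w.grad
        w.data -= 0.01 * w.grad.data    # gradient-descent step
        w.grad.data.zero_()             # reset, so the next backward() starts from zero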

cuenta

Why is d(ŷ - y)/dŷ = 1 and not -1? Can someone simplify it?
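
Reading this as the chain-rule step for the loss (ŷ - y)^2, a short worked version (the notation is inferred from the lecture, so treat it as a sketch):

\frac{\partial(\hat{y} - y)}{\partial \hat{y}} = 1 \quad\text{($y$ is a constant here, so only the $\hat{y}$ term contributes)}

\frac{\partial(\hat{y} - y)}{\partial y} = -1 \quad\text{(the $-1$ appears only when differentiating with respect to $y$)}

\frac{\partial(\hat{y} - y)^2}{\partial \hat{y}} = 2(\hat{y} - y)\cdot 1 = 2(\hat{y} - y)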

KabyanilTalukdar

What is the difference between w.grad and w.grad.data?

sudharsunkaleeswaran

What is the difference between w and w.data?
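
Much the same story as .grad vs .grad.data above: w is the autograd-tracked leaf tensor, while w.data is a view of the same values on which autograd does not record operations. A small illustration (the update value is arbitrary):

import torch

w = torch.tensor([1.0], requires_grad=True)

print(w)        # the tracked leaf tensor (requires_grad=True)
print(w.data)   # same storage, but operations on it are not recorded by autograd

w.data -= 0.1   # e.g. a weight update that should not enter the computational graph
print(w)        # w sees the new value and is still tracked by autograd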

charlesenglebert