Deep Learning With PyTorch - Full Course

In this course you will learn all the fundamentals you need to get started with PyTorch and deep learning.

Get my Free NumPy Handbook:


~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~ SUPPORT ME ~~~~~~~~~~~~~~

#Python #PyTorch

Timeline:
00:00 - Intro
01:42 - 1 Installation
07:30 - 2 Tensor Basics
26:02 - 3 Autograd
42:00 - 4 Backpropagation
55:18 - 5 Gradient Descent
1:12:53 - 6 Training Pipeline
1:27:14 - 7 Linear Regression
1:39:30 - 8 Logistic Regression
1:57:56 - 9 Dataset and Dataloader
2:13:28 - 10 Dataset Transforms
2:24:14 - 11 Softmax and Crossentropy
2:42:36 - 12 Activation Functions
2:52:40 - 13 Feed Forward Net
3:14:18 - 14 CNN
3:36:30 - 15 Transfer Learning
3:51:30 - 16 Tensorboard
4:17:14 - 17 Save & Load Models

----------------------------------------------------------------------------------------------------------
* This is a sponsored link. By clicking on it you will not have any additional costs, instead you will support me and my project. Thank you so much for the support! 🙏
Comments

I hope you enjoy the course :)



patloeber

Wow this is so cool Patrick, a free course on PyTorch, great value you are bringing to the community 😆

DataProfessor

Incredible tutorial, thank you! Some corrections:
- 1:12:02 the correct gradient in the manual gradient calculation should be `np.dot(2*x, y_predicted - y) / len(x)`, because `np.dot` returns a scalar, so calling `.mean()` on it has no effect. (TY @Arman Seyed-Ahmadi)
- 1:23:52 the optimizer applies the gradient exactly as we do; there is no difference. The reason the PyTorch model gives different predictions is that 1) the model has a bias term, and 2) the weights are initialized randomly. To turn off the bias, pass `bias=False` when constructing the model. To initialize the weight to zero, set `model.weight[0, 0] = 0` inside a `with torch.no_grad()` block. Then all versions produce exactly the same model and the same predictions (as expected).
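The gradient fix in the first correction can be checked in isolation. This is a minimal sketch of the course's manual linear-regression example (the exact data and learning rate here are assumptions, not the video's values): since `np.dot` already sums over all elements, dividing by `len(x)` gives the mean, whereas `.mean()` on the scalar result is a no-op.

```python
import numpy as np

# Toy data for y = 2 * x (hypothetical, standing in for the course example)
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

def gradient(x, y, y_predicted):
    # dJ/dw = (1/N) * sum(2*x * (y_pred - y))
    # np.dot sums over all elements and returns a scalar, so we must
    # divide by len(x) ourselves; .mean() on a scalar changes nothing.
    return np.dot(2 * x, y_predicted - y) / len(x)

w = 0.0
learning_rate = 0.01
for _ in range(100):
    y_pred = w * x
    w -= learning_rate * gradient(x, y, y_pred)

print(w)  # converges to 2.0
```

With the buggy version (scalar `.mean()` standing in for the division by `len(x)`), the gradient is four times too large here, which is why the convergence histories in the video differ.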

straighter

This is one of the very few videos which is teaching Pytorch from the ground up! Beautiful work, @Python Engineer. Highly recommend it for any newbie + refresher.

sohamdas

This is a fantastic tutorial, thank you for sharing this great material!

There is one mistake though that needs clarification:

At 1:12:02 it is mentioned that the code with automatic differentiation does not converge as fast because "back-propagation is not as exact as the numerical gradient". This is incorrect: the two versions converge differently because of a mistake in the gradient() function. The dot product np.dot(2*x, y_pred - y) returns a scalar, so .mean() does nothing; the result should instead be divided by len(x) to give the correct mean gradient. After this fix, both methods give exactly the same convergence history and final results.

armansa

Thanks for the awesome course! The material is extremely well curated, every minute is pure gold. I particularly liked the fact that for each subject there is a smooth transition from NumPy to torch. It's perfect for someone who wants a quick and thorough deep learning recap and to get comfortable with hands-on PyTorch coding.

spkt

Thanks for the course Patrick! It was a great refresher!
BTW, at 3:42:02: in newer torchvision versions the pretrained=True argument is deprecated and replaced by the weights parameter, e.g. weights='DEFAULT'.

kamyararshi

I just completed the ML From Scratch course from Python Engineer. It is a great course for someone who learned all those algorithms in the past and wants to see how they are implemented using plain Python and NumPy.

ozysjahputera

The best PyTorch tutorial online. I love how you explain the concepts using simple examples and build on each concept one step at a time.

hom

This is probably one of the best tutorials I've ever seen for pytorch. Thank you so much.

victorpalacios

This is the best course on this topic I've seen so far. It is perfect when you want to understand what you're doing, and the way the material is presented is very pedagogical.

wohnsdj

This is literally incredible. A perfect mix of theory and actual implementation. I can't thank you enough.

liorcole

The best hands-on tutorial on PyTorch on YouTube! Thank you!

terryliu

One of the best PyTorch tutorial series on YouTube :)

danyalzia

At 4:14:00, I think you should use the ground truth as the labels rather than the predictions (line 130), because the PR curve is drawn from the ground-truth labels and the predicted scores.
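The fix the comment suggests can be sketched with `SummaryWriter.add_pr_curve` (the tag name and the toy labels/scores below are assumptions, not the video's values):

```python
import os
import tempfile

import torch
from torch.utils.tensorboard import SummaryWriter

# Hypothetical batch: `labels` holds the ground-truth classes and
# `preds` the predicted probabilities for the positive class.
labels = torch.tensor([0, 1, 1, 0, 1])
preds = torch.tensor([0.1, 0.8, 0.6, 0.3, 0.9])

log_dir = tempfile.mkdtemp()
writer = SummaryWriter(log_dir)
# add_pr_curve expects (ground truth, predicted scores); passing the
# predictions as both arguments would draw a meaningless curve.
writer.add_pr_curve('class_1', labels, preds, global_step=0)
writer.close()

event_files = os.listdir(log_dir)  # one events.out.tfevents.* file
```

TensorBoard then sweeps a threshold over the scores and plots precision against recall, which only makes sense when the first argument is the true labels.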

yan-jieli

Finally PyTorch doesn't seem as scary as it was before. The best tutorial I could find out there, and I understood everything you've said. Thanks a lot.

shunnie

For the feed-forward part, you need to send the model to the GPU when instantiating it:
model = NeuralNet(input_size, hidden_size, num_classes).to(device)
If your device is 'cuda' and you forget the .to(device), you will get a RuntimeError about tensors being on different devices.
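In context, the model and every batch must live on the same device. This is a minimal self-contained sketch in the style of the course's feed-forward chapter (the layer sizes are illustrative, not the video's MNIST values):

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.l2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        return self.l2(self.relu(self.l1(x)))

# Both the model and every input batch must be moved to the same device;
# mixing a CUDA model with CPU tensors raises a RuntimeError.
model = NeuralNet(784, 100, 10).to(device)
images = torch.randn(32, 784).to(device)
outputs = model(images)  # shape: (32, 10)
```

On a CPU-only machine `device` falls back to `'cpu'` and the same code runs unchanged, which is why the `.to(device)` pattern is easy to forget until a GPU is involved.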

alexcampbell-black

Patrick, you're a legend. Thank you so much for this tutorial. Now on to more advanced stuff!

shatandv

Thank you Python Engineer! This is the best tutorial video I've ever seen about PyTorch.

jiecao

I don't even need to watch it to know its quality. Can't wait to watch it and thanks for uploading!

Barneymeatballs