PyTorch Tutorial 06 - Training Pipeline: Model, Loss, and Optimizer

New tutorial series about deep learning with PyTorch!

In this part we improve the code from the previous part and learn how a complete training pipeline is implemented in PyTorch. We replace the manually computed loss and weight updates with a loss function and an optimizer from the PyTorch framework, which handle the optimization for us. We then see how a PyTorch model is implemented and used for the forward pass. A minimal sketch of the full pipeline follows the list below.

- Training Pipeline in PyTorch
- Model Design
- Loss and Optimizer
- Automatic Training Steps with forward pass, backward pass, and weight updates
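A minimal sketch of that pipeline, assuming the same toy data (y = 2x) used throughout this series; the learning rate and epoch count are illustrative, not the video's exact values:

import torch
import torch.nn as nn

# Toy data for f(x) = 2*x; nn.Linear expects shape (n_samples, n_features)
X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
Y = torch.tensor([[2.0], [4.0], [6.0], [8.0]])

n_samples, n_features = X.shape

# 1) Model design: a single linear layer replaces the hand-written forward pass
model = nn.Linear(n_features, 1)

# 2) Loss and optimizer from the framework replace the manual versions
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 3) Training loop: forward pass, backward pass, weight update
for epoch in range(200):
    y_pred = model(X)             # forward pass
    loss = loss_fn(y_pred, Y)     # compute loss
    loss.backward()               # backward pass: accumulate gradients
    optimizer.step()              # update weights
    optimizer.zero_grad()         # reset gradients for the next step

print(model(torch.tensor([[5.0]])).item())  # approaches 10.0 as training converges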

Part 06: Training Pipeline: Model, Loss, and Optimizer

📚 Get my FREE NumPy Handbook:

📓 Notebooks available on Patreon:

If you enjoyed this video, please subscribe to the channel!

Official website:

Part 01:

Linear Regression from scratch:

Code for this tutorial series:

You can find me here:

#Python #DeepLearning #Pytorch

----------------------------------------------------------------------------------------------------------
* This is a sponsored link. By clicking on it you will not have any additional costs, instead you will support me and my project. Thank you so much for the support! 🙏
Comments

You are so underrated. This video needs to reach everyone out there who is struggling with the basics,
as videos like this are super rare to find online. Keep up the good work :-)

amanpratapsingh

I love finally finding a series that explains the absolute bare basics. So tired of tutorials that are like "Ok, so start by importing torch, then use the prebuilt DataLoader class to load a built-in dataset, then run it through a built-in model and print the result. Congrats, you did AI"

zackinator

If anyone wants to know: the reason you need more iterations with the SGD optimizer than with manual gradient descent is that the SGD algorithm is designed for huge training sets and may use only a random fraction of the samples to compute the loss at each step. The reason you need even more iterations with the nn.Linear model is that, by default, it tries to learn a bias (the b in mx + b) that is absent from the real data (2x). This superfluous parameter throws off every prediction and slows down every learning iteration until it gets optimized away. You can remove it by passing bias=False to the nn.Linear constructor, and then the model converges a lot better.

SquidOnWeed
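A minimal sketch of the bias=False fix described in the comment above; the hyperparameters are illustrative:

import torch
import torch.nn as nn

X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
Y = 2 * X  # the true relationship y = 2x has no intercept

# bias=False drops the b in w*x + b, so only w has to be learned
model = nn.Linear(1, 1, bias=False)
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for _ in range(100):
    loss = loss_fn(model(X), Y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

print(model.weight.item())  # converges toward 2.0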

Thank you so much. Surviving university because of guys like you.
Much appreciated!!

jeffinkachappilly

After watching one video, I subscribed to your channel and am planning to finish the PyTorch tutorial in 2 days. It is very helpful in understanding all the basics. I highly appreciate your efforts in making this course, keep up the good work :-)

SureshKumar-tmxy

Holy crap, thank you for making this series. I've been struggling with this stuff for months now and was finding it nearly impossible to make headway towards really grasping it. The basics seemed impossible to get a firm hold of and apply. These videos and how you broke everything down to the most basic levels made it so much clearer. It feels like I've made more real headway in the last couple hours than in the month before that.

iEdp_

Dude, this is fucking better than any tutorial I've ever seen

callforpapers_

Really awesome series of videos; I have no words to express my gratitude

praveenchalampalem

This is the best tutorial I have ever seen for PyTorch

venkatesanr

Thanks for this awesome series on PyTorch. I would definitely love to see more!

michelolzam

Thanks very much for this tutorial! It really helps me a lot. Even though I am doing deep learning, I never went through the basics systematically step by step, and your videos make me understand things better. Keep up your amazing work!

DFan-ucqz

I liked every single one of your videos that I've watched so far. Thanks so much for the tutorial!

haoteli

Great stuff, helped me break through some of the basics!

romp

7:56 here: output_size = n_features only works because n_features is 1. I think you should simply write output_size = 1 to avoid confusion. For a regression problem, in general, output_size is 1.

harris
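For reference, a minimal sketch of the model setup the comment above refers to, written with an explicit output_size = 1 (toy data assumed):

import torch
import torch.nn as nn

X = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
n_samples, n_features = X.shape  # n_features happens to be 1 here

input_size = n_features
output_size = 1  # a regression model predicts one value per sample
model = nn.Linear(input_size, output_size)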

Your channel is very helpful my friend ❣️

vishalwaghmare

Damn, this was the best video I have ever seen on PyTorch. Thank you so much for the amazing, amazing, amazing content!

madhurabhalke

Just what I needed and have been looking for.

ronstubed

I just wanted to thank you for your awesome code, man, I learned a lot here. Please don't stop, keep going!

hosseinnikraveshmatin

Wow, man!! You are awesome. Thanks for uploading this playlist. Much Love from India

bhabeshmali

Thank you very much for your excellent course. Since I had taken a machine learning course at university but was unfamiliar with the syntax of classes, I needed to pause the videos and look up some concepts (such as super(), __init__, and so on).

arashsajjadi
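For anyone pausing on the same syntax, a minimal sketch of the class pattern used for PyTorch models; the class name and dimensions here are illustrative:

import torch
import torch.nn as nn

class LinearRegression(nn.Module):
    def __init__(self, input_dim, output_dim):
        # super().__init__() runs nn.Module's own setup (e.g. parameter
        # registration) before this subclass adds its layers
        super().__init__()
        self.lin = nn.Linear(input_dim, output_dim)

    def forward(self, x):
        # invoked when the model is called like a function: model(x)
        return self.lin(x)

model = LinearRegression(1, 1)
print(model(torch.tensor([[3.0]])))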