PyTorch Tutorial 13 - Feed-Forward Neural Network

New Tutorial series about Deep Learning with PyTorch!

In this part we will implement our first multilayer neural network that can do digit classification based on the famous MNIST dataset.

We put together everything from the previous tutorials (a condensed code sketch follows the list):

- Use the DataLoader to load our dataset and apply a transform to it
- Implement a feed-forward neural net with an input layer, a hidden layer, and an output layer
- Apply activation functions
- Set up the loss and the optimizer
- Write a training loop that uses batch training
- Evaluate the model and calculate the accuracy
- Additionally, make sure the whole code can also run on the GPU if GPU support is available
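
Here is that condensed sketch, assuming the standard torchvision MNIST dataset; the hyperparameter values are illustrative:

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Hyperparameters (illustrative values)
input_size = 784       # 28x28 MNIST images, flattened
hidden_size = 100
num_classes = 10
num_epochs = 2
batch_size = 100
learning_rate = 0.001

# Run on the GPU if one is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# MNIST via torchvision, with a ToTensor transform
train_dataset = torchvision.datasets.MNIST(root='./data', train=True,
                                           transform=transforms.ToTensor(), download=True)
test_dataset = torchvision.datasets.MNIST(root='./data', train=False,
                                          transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

# Feed-forward net: input layer -> hidden layer -> output layer
class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.l2 = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        out = self.relu(self.l1(x))
        return self.l2(out)  # no softmax here: CrossEntropyLoss applies it internally

model = NeuralNet(input_size, hidden_size, num_classes).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

# Training loop with batch training
for epoch in range(num_epochs):
    for images, labels in train_loader:
        images = images.reshape(-1, 28 * 28).to(device)  # flatten [100, 1, 28, 28] -> [100, 784]
        labels = labels.to(device)

        loss = criterion(model(images), labels)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Evaluation: accuracy on the test set
with torch.no_grad():
    correct, total = 0, 0
    for images, labels in test_loader:
        images = images.reshape(-1, 28 * 28).to(device)
        labels = labels.to(device)
        _, predicted = torch.max(model(images), 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
print(f'Accuracy: {100.0 * correct / total:.2f} %')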

📚 Get my FREE NumPy Handbook:

📓 Notebooks available on Patreon:

Part 13: Feed-Forward Neural Network

If you enjoyed this video, please subscribe to the channel!

Official website:

Part 01:

Code for this tutorial series:

You can find me here:

#Python #DeepLearning #Pytorch

----------------------------------------------------------------------------------------------------------
* This is a sponsored link. Clicking it costs you nothing extra; instead, you will support me and my project. Thank you so much for the support! 🙏
Comments
Author

At 10:35 I forgot to send the model to the device! Please call model = NeuralNet(input_size, hidden_size, num_classes).to(device)

patloeber
Author

It really felt satisfying to be able to put together everything I learned in the previous videos. Thank you for this series!

prudvi
Author

You have been doing a great job teaching PyTorch to beginners like me! Keep it up!

uniwander
Author

Really fantastic series, keep up the amazing work! Looking forward to your future videos!

danielstoops
Author

Excellent, clear, and without extra stuff! Thank you!

uncoded
Author

For me, it works with "samples, labels = next(examples)".
Otherwise, it throws an error: "AttributeError: object has no attribute 'next'"

shubhamchechani
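
For context: recent PyTorch versions removed the old .next() method from the DataLoader iterator, so the built-in next() is the way to go. A minimal sketch, assuming the test_loader from the tutorial:

# Grab one batch from the DataLoader with the built-in next();
# iterator.next() no longer exists in newer versions.
examples = iter(test_loader)
samples, labels = next(examples)
print(samples.shape, labels.shape)  # e.g. torch.Size([100, 1, 28, 28]) torch.Size([100])
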
Author

Nice.
And good call on explaining why to avoid calling a softmax activation in the model, because the cross-entropy criterion does that for us.

juleswombat
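
Indeed: nn.CrossEntropyLoss expects raw logits and internally combines LogSoftmax and NLLLoss, so adding a softmax layer in the model would apply it twice. A quick illustration:

import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw model outputs, no softmax
target = torch.tensor([0])

# CrossEntropyLoss == LogSoftmax + NLLLoss applied to the raw logits
loss = nn.CrossEntropyLoss()(logits, target)
same = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), target)
print(loss.item(), same.item())  # the same value
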
Author

Thanks a lot, I am really enjoying your tutorials. Good job!

fbaftizadeh
Author

What a great video! Thank you very much :)

guilhermelopes
Author

Great series, thank you, Python Engineer

yassine
Author

A quick suggestion: you could add a plot/image of the number and show what the NN predicts.

unknown
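
A minimal sketch of that suggestion, assuming matplotlib plus the samples batch, model, and device from the tutorial code:

import torch
import matplotlib.pyplot as plt

# Show the first test image with the network's prediction as the title
image = samples[0]  # shape [1, 28, 28]
with torch.no_grad():
    output = model(image.reshape(-1, 28 * 28).to(device))
    _, predicted = torch.max(output, 1)

plt.imshow(image.squeeze(), cmap='gray')
plt.title(f'Predicted: {predicted.item()}')
plt.show()
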
Author

Hi, I love following your series, thanks!
Can you please elaborate on the torch.max part?
What exactly do you call "values" (you ignore those and store them in "_")? Values of what?
Why is an index the same as the predicted label?

And what is the "1" passed along with the model output?

marfblah
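
For anyone with the same question: torch.max(outputs, 1) returns a (values, indices) pair, and the 1 is the dimension to reduce over (the class dimension). The values are the largest raw scores per sample; since the highest score marks the most likely class, its index is the predicted label. A small illustration:

import torch

# Fake model output for a batch of 2 samples and 4 classes
outputs = torch.tensor([[0.1, 2.3, 0.4, 0.0],
                        [1.5, 0.2, 0.1, 0.3]])

# dim=1 reduces over the class dimension
values, indices = torch.max(outputs, 1)
print(values)   # tensor([2.3000, 1.5000]) -> the max scores (the "values")
print(indices)  # tensor([1, 0])           -> their positions = predicted labels
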
Author

@Python Engineer
Why do you have to call zero_grad before calculating the loss function?
Shouldn't we calculate the loss and take the optimizer step before calling zero_grad?

leonardmensah
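
For anyone wondering the same: PyTorch accumulates gradients in .grad across backward() calls, so the only requirement is that zero_grad() runs before the next backward(). Calling it before or after the loss computation makes no difference, because the forward pass does not touch the gradients. A sketch, assuming the model/criterion/optimizer names from the video:

# Either ordering works, as long as zero_grad() precedes backward():
loss = criterion(model(images), labels)  # forward pass; does not touch .grad
optimizer.zero_grad()                    # clear gradients accumulated so far
loss.backward()                          # compute fresh gradients
optimizer.step()                         # update weights using those gradients
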
Author

Wonderful and amazing teaching, sir, thanks a lot.

mohamedsaaou
Author

Thank you for your instructions. They are really helpful for my assignment.

thanhquocbaonguyen
Author

Many thanks for your great video, @PythonEngineer!
Just one question: how do you choose the hidden_size? Does it only represent the number of neurons in the network?

Best!

eb
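
For reference: hidden_size is a free hyperparameter, the number of neurons in the hidden layer; it is typically chosen by experimentation (e.g., trying 50, 100, or 500 and comparing accuracy). It only shows up in the layer shapes:

import torch.nn as nn

hidden_size = 100                 # width of the hidden layer; tune it like any hyperparameter
l1 = nn.Linear(784, hidden_size)  # input -> hidden
l2 = nn.Linear(hidden_size, 10)   # hidden -> output (10 digit classes)
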
Author

It's really awesome content. Just a suggestion, bro: you could collect the recurring questions from the comment section into a FAQ, which would help beginners resolve their common doubts.

adityajindal
Author

Thanks!
What's the idea behind dividing the test data into batches as well? I thought batches were only relevant when training...

arikfriedman
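
For context: batching the test set is about memory, not learning. Instead of pushing all 10,000 test images through the model at once, the evaluation also iterates batch by batch and accumulates the statistics. A sketch, assuming the test_loader, model, and device from the tutorial:

import torch

correct, total = 0, 0
with torch.no_grad():  # no gradients needed for evaluation
    for images, labels in test_loader:
        images = images.reshape(-1, 28 * 28).to(device)
        labels = labels.to(device)
        _, predicted = torch.max(model(images), 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()
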
Author

How can the NeuralNet class be rewritten as a multiple regression network? E.g., I want to train a network to predict 3 real values as the output, representing (x, y, z) coordinates. Could I use the class as is, or do I have to change the forward pass or the output dimensions?

Al-nsyw
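
For anyone adapting the class this way: the structure can stay, but the output layer needs 3 units and the loss should be a regression loss such as nn.MSELoss (and no argmax at evaluation). A hypothetical sketch (RegressionNet and its sizes are made up for illustration):

import torch.nn as nn

class RegressionNet(nn.Module):  # hypothetical variant of NeuralNet
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.l2 = nn.Linear(hidden_size, 3)  # 3 real outputs: (x, y, z)

    def forward(self, x):
        out = self.relu(self.l1(x))
        return self.l2(out)  # raw values, no activation on the output

model = RegressionNet(input_size=10, hidden_size=32)
criterion = nn.MSELoss()  # regression loss instead of CrossEntropyLoss
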
Author

Great video! I was wondering if the BCELoss function applies the sigmoid function before computing the loss, just like cross-entropy applies the softmax function.

neithane
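
To answer precisely: no, nn.BCELoss does not apply the sigmoid; it expects probabilities already in [0, 1]. The variant that does fold the sigmoid in (analogous to CrossEntropyLoss folding in the softmax) is nn.BCEWithLogitsLoss. A quick illustration:

import torch
import torch.nn as nn

logits = torch.tensor([0.8, -1.2])
targets = torch.tensor([1.0, 0.0])

# BCELoss needs probabilities, so apply the sigmoid yourself ...
loss_a = nn.BCELoss()(torch.sigmoid(logits), targets)
# ... while BCEWithLogitsLoss applies the sigmoid internally
loss_b = nn.BCEWithLogitsLoss()(logits, targets)
print(loss_a.item(), loss_b.item())  # the same value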