PyTorch Tutorial 16 - How To Use The TensorBoard

New Tutorial series about Deep Learning with PyTorch!

In this part we will learn about TensorBoard and how we can use it to visualize and analyze our models. TensorBoard is a visualization toolkit that provides the tooling needed for machine learning experimentation.

We will learn:
- How to install and use TensorBoard in PyTorch (a minimal setup sketch follows this list)
- How to add images
- How to add a model graph
- How to visualize loss and accuracy during training
- How to plot precision-recall curves
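
A minimal setup sketch for the first point above, assuming the standard torch.utils.tensorboard API; the log directory name "runs/mnist" and the scalar tag are placeholders, not the exact names used in the video:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/mnist")      # event files go to ./runs/mnist

# log a scalar (e.g. training loss) against the global step
for step in range(100):
    loss = torch.rand(1).item()           # placeholder value
    writer.add_scalar("training loss", loss, step)

writer.close()                            # flush and close the event file
# then start TensorBoard from a terminal:  tensorboard --logdir=runs
```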

📚 Get my FREE NumPy Handbook:

📓 Notebooks available on Patreon:

Part 16: How To Use The TensorBoard

If you enjoyed this video, please subscribe to the channel!

Official website:

Part 01:

Further Readings:

Code for this tutorial series:

You can find me here:

#Python #DeepLearning #Pytorch

----------------------------------------------------------------------------------------------------------
* This is a sponsored link. By clicking on it you will not incur any additional costs; instead, you will support me and my project. Thank you so much for the support! 🙏
Comments

Finally my search on YouTube ends here. I wanted someone who uses OOP in Python to solve machine learning problems, and you are the only person on YouTube who does this. Thank you so much. Please, please make more videos on ML using OOP, and more frequently.

flamboyantperson

Nice tutorial, I just have one concern. Suppose your batch_size is 64; in that case you would have a total of 938 batches, with the first 937 batches having 64 examples each and the last batch having 32 examples. If we check (i+1) % 100 == 0, we are computing the average loss and accuracy over the last 100 steps. But once i goes past 900, you accumulate the loss and correct predictions for the remaining 38 batches and then add them in the next epoch, when the step count next becomes a multiple of 100. So, essentially, you would be computing the loss as [loss (38 steps from the last epoch) + loss (100 steps from the current epoch)] / 100, which inflates both the loss and the accuracy. Just wanted to highlight this. A good idea would be to add another variable, say steps_seen, which is incremented every time a batch/step is processed and reset to 0 along with the running loss and correct predictions. That way, even when the logging step does not fall exactly 100 steps after the previous reset, you would still compute the loss and accuracy as [loss (38 steps from the previous epoch) + loss (100 steps from the current epoch)] / (38 + 100).

skymanaditya
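
A minimal self-contained sketch of the accumulation fix described in the comment above; the toy data, model, and variable names (running_loss, running_correct, samples_seen, steps_seen) are illustrative, not taken from the video:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.tensorboard import SummaryWriter

# toy data and model just to make the snippet runnable
X, y = torch.randn(60000, 784), torch.randint(0, 10, (60000,))
train_loader = DataLoader(TensorDataset(X, y), batch_size=64)
model = nn.Linear(784, 10)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
writer = SummaryWriter("runs/mnist")

running_loss, running_correct, samples_seen, steps_seen = 0.0, 0, 0, 0
num_epochs = 2
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        outputs = model(images)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        running_correct += (outputs.argmax(dim=1) == labels).sum().item()
        samples_seen += labels.size(0)
        steps_seen += 1

        if (i + 1) % 100 == 0:
            global_step = epoch * len(train_loader) + i
            # divide by what was actually accumulated, not a hard-coded 100
            writer.add_scalar("training loss", running_loss / steps_seen, global_step)
            writer.add_scalar("accuracy", running_correct / samples_seen, global_step)
            running_loss, running_correct, samples_seen, steps_seen = 0.0, 0, 0, 0
writer.close()
```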

Haven't finished the video yet, so I apologize if you already fixed this or went over it. But I noticed that around the 9 minute mark we're told to use "writer.add_graph(model, example_data.reshape(-1, 28*28))", which works, but only if you're running on the CPU, since example_data is currently on the CPU (unless I did something wrong, which is very possible). I'm using a GPU, and all I needed to do to fix it was change that to "writer.add_graph(model, example_data.reshape(-1, 28*28).to(device))" and boom, problem solved. Anyway, awesome tutorials!!!

starblasters
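
A short self-contained sketch of the device fix mentioned above; the stand-in model, the batch shape, and the log directory are assumptions, not the tutorial's exact code:

```python
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(28 * 28, 10).to(device)   # stand-in for the tutorial's model
example_data = torch.randn(64, 1, 28, 28)   # a CPU batch, as it comes from the DataLoader

writer = SummaryWriter("runs/mnist")
# add_graph traces the model with the example input, so the input must live
# on the same device as the model's parameters:
writer.add_graph(model, example_data.reshape(-1, 28 * 28).to(device))
writer.close()
```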

This helped me a lot. Thanks for your kind explanation!

MKim-ncgr

Thank you so much for this helpful tutorial. 🍀🙏

fatemehbehrad

Very well done!
I am watching your videos to revise what I know. :D

sarahjamal

Very helpful, thanks! Please keep uploading more PyTorch tutorials.

mudloc

Excellent explanation! Extremely useful, thanks.

miscelanea

Great vid! I think you shouldn't have appended predicted to your labels, because that is not the ground truth (correct label) but the estimated/predicted label; that's why you get a perfect PR curve.

raminessalat
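
A minimal sketch of how add_pr_curve expects ground-truth labels and predicted probabilities; the random probs/labels tensors and the log directory are placeholders standing in for values collected over the test set:

```python
import torch
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/mnist")

# stand-ins: `probs` are softmax outputs (N x 10), `labels` the true class indices (N,)
probs = torch.softmax(torch.randn(1000, 10), dim=1)
labels = torch.randint(0, 10, (1000,))

for class_index in range(10):
    writer.add_pr_curve(str(class_index),
                        labels == class_index,   # binary ground truth for this class
                        probs[:, class_index],   # predicted probability of this class
                        global_step=0)
writer.close()
```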

Thank you so much!! This is really amazing.

hazemahmed

Hello Python Engineer, thank you for this video, I really found it helpful. I am having one challenge though: how can I run the visualisation on a GPU server (NVIDIA GPU) that I want to use for my training?

ea-ij

Why do we see two lines on the training loss and accuracy graphs in TensorBoard?

anonim

Just finished the PyTorch playlist. Loved your content. Will you be making tutorials on RNNs and LSTMs with PyTorch?

SurajSubbarao

Hello, why do you append the predicted data when the documentation says it needs to be the ground truth? I find that a bit confusing :(

nougatschnitte

Could you add an example using a pretrained object detection model?

helloansuman

@2:57 do you know how I can change the localhost address if I wish to?

seyeeet

Hmmm, shouldn't the last line of the code in line 157, the writer.close(), be outside the for loop? What does writer.close() actually do?

seyeeet
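
A minimal sketch of the typical placement (the scalar values and log directory here are placeholders, not the code from the video): close() flushes any pending events to disk and closes the event file, so it normally belongs once, after the training loop.

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/mnist")

for step in range(300):
    writer.add_scalar("training loss", 1.0 / (step + 1), step)

# flush pending events and close the event file, once, after training
writer.close()
```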

What is the advantage of using TensorBoard? We could just use matplotlib to visualise it, right?

kaviyarasanpr

Another great tutorial, thanks a lot! I have a small question: how can I clear TensorBoard?

dinamoses

Hello, there is one issue in the writer.add_graph line: example_data should have .to(device) added. And I have a question about the use of torch.stack: is the aim of this operation to transform the per-batch data from a list into a tensor? I'm a little confused.

xinqiaozhao
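
A short sketch of what torch.stack (and the related torch.cat) does, with illustrative shapes rather than the exact tensors from the video:

```python
import torch

# torch.stack takes a Python list of same-shaped tensors and joins them
# along a NEW dimension, turning the list into a single tensor.
batch_probs = [torch.rand(10) for _ in range(64)]   # 64 per-sample probability vectors
stacked = torch.stack(batch_probs)                  # shape: (64, 10)

# torch.cat, by contrast, joins tensors along an EXISTING dimension,
# which is what you would use to combine per-batch results afterwards.
per_batch = [torch.rand(64, 10), torch.rand(32, 10)]
combined = torch.cat(per_batch)                     # shape: (96, 10)

print(stacked.shape, combined.shape)
```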