PyTorch: using a Google Colab GPU to run your machine learning programs.

This is the 2nd video in our PyTorch series, but it can be used with any program. We are going to get set up and run programs in Google Colaboratory to take advantage of a GPU. This will allow everyone to progress in this series even on an older PC.
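A rough sketch of the device selection this setup enables, assuming a standard PyTorch install (in Colab: Runtime → Change runtime type → Hardware accelerator → GPU):

```python
import torch

# Use the GPU Colab assigned if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```

Everything else in the series stays the same; only models and tensors need to be moved to this device.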

Below is the link to our GitHub with all the code, as well as our Discord. I am online, along with other team members, if you have any questions or get stuck.

Comments

Thank you for this tutorial, it was a life saver!! I couldn't figure out how to load my model and data onto Colab's GPU previously, but this cleared up all my doubts!

kkawesomeperson

Thank you!
I couldn't find the .to(device) method anywhere! Now that you've shown it, it looks pretty obvious :)

KalmanHuman
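A minimal sketch of the `.to(device)` pattern the comments above refer to; the layer sizes here are hypothetical placeholders, not the video's actual network:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A tiny stand-in model; .to(device) moves its parameters onto the GPU/CPU.
model = nn.Linear(784, 10).to(device)

# Inputs must be moved to the same device as the model before the forward pass.
x = torch.randn(1, 784).to(device)
out = model(x)
print(out.shape)
```

If the model and its inputs live on different devices, PyTorch raises a runtime error, which is why both get the `.to(device)` call.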

If we don't add the GPU stuff, where does it run? On our own computer? Because I didn't add your extra test code, and only "firstTest.png" gave an error (it couldn't be found).

Fine_Mouche

I got an error when I re-ran using the GPU: "Expected one of cpu, cuda, xpu, mkldnn, opengl, opencl, ideep, hip, msnpu, xla, vulkan device type at start of device string: cude"

yilu
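The error quoted above comes from a typo in the device string: "cude" instead of "cuda". A minimal reproduction and fix, assuming a standard PyTorch install:

```python
import torch

# Misspelled device strings are rejected with the error quoted in the comment.
try:
    torch.device("cude")  # typo for "cuda"
except RuntimeError as err:
    print("rejected:", err)

# Correct spelling, with a CPU fallback when no GPU is attached.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
```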

I've used the exact same code, but no matter what image I enter, it always returns 3 as its guess. Do you know why that could be?

NiklasUnruh

10:57: Is the 5 the answer the AI gives, or is it just its analysis? Like, a true answer would be a printed "5".
edit: OK, so my run was wrong: the test image is a 3 and it answers "tensor(2)" ... :c
re-edit: I set epochs to 15 and tried different 'lr' values, but the AI is still wrong every time ...

Fine_Mouche

That's amazing, but I suppose they'll take it down soon because people will use it to mine crypto.

vnldces