Data - Deep Learning and Neural Networks with Python and Pytorch p.2

So now that you know the basics of what Pytorch is, let's apply it using a basic neural network example. The very first thing we have to consider is our data.

#pytorch #deeplearning #machinelearning
Comments

DataLoader actually returns a LIST of x tensors and y tensors, not a tensor of tensors (this would be impossible unless the x and y dimensions were the same, of course).
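
A minimal way to see this for yourself, with toy data rather than the MNIST set from the video:

import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy data where x and y have different shapes, so a single
# "tensor of tensors" could not hold both.
xs = torch.randn(100, 28, 28)
ys = torch.randint(0, 10, (100,))
loader = DataLoader(TensorDataset(xs, ys), batch_size=10)

batch = next(iter(loader))
print(type(batch))     # <class 'list'>
print(batch[0].shape)  # torch.Size([10, 28, 28]) -- the batch of xs
print(batch[1].shape)  # torch.Size([10])         -- the batch of ys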

jphoward

Your 30 min videos feel like 5 min, while my 1 hr lectures feel like 3 hrs of pure boredom.

jasonmomo

This is why you’re amazing. You don’t just teach the framework. You teach the logic, the intuition, the best practices, and the philosophy behind deep learning. Thanks.

judedavis

"You came back!"
Yeah, no shit, you beautiful human

Proprogrammer

One video a day? This is madness!
I love it

diogoverde

Loving this series. And please do a neural network from scratch series after this

aryanbhatia

I am gonna use deep learning and PyTorch for an NLP-related project that I'm working on. I have been looking for tutorials, and there are thousands of them available out there! After I got bored with two of them, I gave yours a try and liked it very much! I didn't get bored, and you explained the details very well with fine wording. I even *finally* started using Jupyter thanks to this tutorial. Now, on to the third episode!

aysesalihasunar

Thanks a lot for posting this video series in an easy-to-understand manner, with lots of explanation!

krishnam

Instead of manually typing out the dictionary for counter_dict, you could also do:


counter_dict = {x: 0 for x in range(10)}
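
For context, here's where that comprehension slots into the balance check from the video; a sketch assuming the same MNIST setup (dataset path and variable names approximate):

import torch
from torchvision import datasets, transforms

train = datasets.MNIST("", train=True, download=True,
                       transform=transforms.Compose([transforms.ToTensor()]))
trainset = torch.utils.data.DataLoader(train, batch_size=10, shuffle=True)

# The comprehension replaces the hand-typed {0: 0, 1: 0, ..., 9: 0}.
counter_dict = {x: 0 for x in range(10)}
for data in trainset:
    Xs, ys = data
    for y in ys:
        counter_dict[int(y)] += 1
print(counter_dict)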

gassd

You uploaded it fast! I’m sooo happyyy! Next one sooon please!!

davidserero

I wanted to express my gratitude for sharing that incredibly insightful and valuable deep learning tutorial on neural networks. It has been a game-changer in my understanding of the topic, and I truly appreciate your informal guidance. Thank you!

kadaliakshay

having a million subs is cool.
having a million subs who are following you to learn technical & niche content is very cool.

wayfaring.stranger

The 'base 8' you mention is because a byte has 8 bits of data. When working directly with hardware (particularly microcontrollers), the most efficient way to execute or parse data depends on the processor itself, though it will always be in multiples of 8, if that makes sense... P.S. Love the channel!!!

theplayingofgames

Man, thanks for all of this. Usually I can't follow tutorials, but with you it's so easy. Keep up the good work :)

olee_

I am totally new to Deep Learning and Pytorch. Your explanations are awesome, I understand better now! :D Thanks so much!!

gedance

Just finished all 8 videos in this playlist. Loved it. Hope you make more of these PyTorch videos.

arindam

As Harrison mentioned at 25:29, if anyone is wondering how to create counter_dict using Counter, here is the code:

from collections import Counter

ys = [x[1] for x in train]  # the label is the second element of each sample
print(Counter(ys))
print(dict(Counter(ys)))  # if you want a dictionary object

RajatBhatt

Jeez man, you're a freakin hero! Looking forward to the next video of this series :)

hussamsoufi

17:26 Correction: data is a list: the first element is a tensor of tensors (the pixel information of 10 samples), while the second element is a tensor of integers representing the labels (the corresponding digits) of those 10 sample images.
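
A quick way to verify this in the notebook, assuming trainset is the batch-size-10 DataLoader from the video:

data = next(iter(trainset))  # one batch, same as `for data in trainset: break`
print(type(data))     # <class 'list'>
print(data[0].shape)  # torch.Size([10, 1, 28, 28]) -- 10 images, 1 channel, 28x28
print(data[1].shape)  # torch.Size([10])            -- the 10 corresponding labels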

bharathhegde

Great vid Harrison, thanks. Regarding choosing batch size: yes, typically the larger the batch size, the faster your model converges. But setting your batch size too large will result in an out-of-memory error, so it's important to calculate the largest batch size possible without exceeding your CUDA memory limit.

But one thing I've never gotten a straight answer on is this: what's the largest batch size you can choose, as a function of
1) your particular GPU's memory size,
2) the size (MB, KB, etc.) of your training examples (maybe GBs if training on very large images), and
3) the size of your model parameters?

All three of these must fit inside your GPU memory, so in theory there should be a formula to calculate the largest possible batch size. Something like GPU_mem_size - model_weights_params_size = remaining memory for training samples. Then take that value and divide it by the typical size of a single training sample (image file or whatever). The result of this division, call it n, is theoretically the largest batch size you can fit on your GPU. Then you would probably round down to the nearest multiple of 8 (rough sketch below).


I'm probably leaving something out of this equation, but that's the general idea.
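
For concreteness, here's the kind of sketch I mean, with completely made-up numbers. Note it ignores activations, gradients, and optimizer state, which also live in GPU memory during training, so the real ceiling is much lower (in practice people often just shrink the batch size until the OOM errors stop):

def theoretical_max_batch(gpu_mem_bytes, model_bytes, sample_bytes):
    # Memory left over after the model's weights are loaded onto the GPU.
    remaining = gpu_mem_bytes - model_bytes
    # How many samples fit in what's left.
    n = remaining // sample_bytes
    # Round down to the nearest multiple of 8.
    return (n // 8) * 8

# Made-up example: 8 GB GPU, 100 MB of weights, 28x28 float32 images.
print(theoretical_max_batch(8 * 1024**3, 100 * 1024**2, 28 * 28 * 4))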


Any thoughts on a straightforward approach to calculating this theoretical max batch size? Thanks Harrison (or anyone else who happens to know the answer)!

RedShipsofSpainAgain