When M1 DESTROYS an RTX card for Machine Learning | MacBook Pro vs Dell XPS 15

Testing the M1 Max GPU with a machine learning training session and comparing it to an NVIDIA RTX 3050 Ti and an RTX 3070. Cases where Apple Silicon might be better than a discrete graphics card in a laptop.

#m1 #m1max #ml #pytorch #rtx3070 #macbookpro #intel12thgen #rtx3050ti #dellxps15

ML code:

(Take 15% off any premium NativeScript course by using the coupon code YT2020)


Comments

Having access to 64 GB of GPU memory is just insane at this price. Theoretically you can even train large GAN models on this. Sure, it will take a very long time, but the fact that you can still do it at that price and with this efficiency is just madness. The unified-memory approach is just brilliant, and it seems that both Intel and AMD are slowly moving in this direction.

georgioszampoukis

I trained VGG16 on a fully loaded MacBook Pro 14" 2023 (M2 Max / 96 GB of unified memory) in 16.65 min total training time.

youneslaidoudi

I would still like to see a speed comparison with a lower batch size, because memory is just one aspect of a GPU. If it is still slower, then it's not better.

hugobarros

This comparison doesn't even make sense. You are comparing a $5,000 laptop to two laptops that cost only a fraction of what this 64 GB RAM monster costs.

hozayfakhleef

Thanks a lot, Alex, for your videos. ... Because of your videos I purchased an M1-based MacBook, which has made my work really smooth. Now I can use VS Code with many other useful Chrome extensions simultaneously, making my web development work much easier. I think Apple should keep you on their marketing team 😀😀. You are doing better than their whole expensive marketing campaign. I had no reason to purchase a MacBook, then I saw your videos, which really helped me out.

shashank

It would be interesting to see the performance with a limited batch size on the RTX GPUs versus the M1 Max.

gabrigamerskyrim
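
Below is a minimal PyTorch sketch of how the fixed-batch-size comparison these comments ask for could be set up, assuming PyTorch 1.12+ with the MPS backend; the ResNet-18 model, batch size of 32, and single-epoch timing are illustrative choices, not the benchmark from the video.

```python
# Hypothetical sketch: time one training epoch on CIFAR10 with the same
# small batch size on whatever accelerator is available (CUDA, MPS, or CPU).
import time
import torch
import torchvision
from torch.utils.data import DataLoader

device = (
    torch.device("cuda") if torch.cuda.is_available()
    else torch.device("mps") if torch.backends.mps.is_available()
    else torch.device("cpu")
)

BATCH_SIZE = 32  # kept deliberately small so it also fits on an 8 GB card

transform = torchvision.transforms.ToTensor()
train_set = torchvision.datasets.CIFAR10("data", train=True, download=True, transform=transform)
loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

model = torchvision.models.resnet18(num_classes=10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = torch.nn.CrossEntropyLoss()

start = time.time()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
print(f"{device}: one epoch at batch size {BATCH_SIZE} took {time.time() - start:.1f}s")
```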

But in reality, every production-grade ML task is done in a distributed manner in the cloud using Spark, because it's impossible to fit real-time data on a single computer's storage. So it doesn't matter whether your local computer is Apple or non-Apple; it is only used for initial development and prototypes.

noone-dcuh

Great video! Currently I'm actually quite interested in how well the M1 (base M1)/M2 chip would perform in basic machine learning tasks implemented in R.

keancabigao

Now shared memory for the GPU makes sense, good comment ;)

wynegs.rhuntar

This isn't actually the case if your data loaders are memory-intensive (audio loading, etc.). Ultimately you'll want your own set of dedicated RAM so that your CPU isn't bottlenecked.

kevinsasso
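
As a concrete illustration of that data-loading point, here is a hedged PyTorch sketch of the DataLoader settings that determine how much host RAM the input pipeline itself consumes; the in-memory dataset and all sizes are made up for illustration.

```python
# Sketch: data-loader settings that trade CPU/RAM usage for throughput.
# Each worker process keeps its own prefetched batches in host RAM, so
# memory-heavy decoding (audio, video, large images) multiplies quickly.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(2_000, 3, 64, 64), torch.randint(0, 10, (2_000,)))

loader = DataLoader(
    dataset,
    batch_size=64,
    num_workers=4,        # parallel loading processes; each uses extra RAM
    prefetch_factor=2,    # batches prefetched per worker, also held in RAM
    pin_memory=True,      # page-locked host buffers for faster GPU transfer
)

for batch, labels in loader:
    pass  # training step would go here
```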

Would love to see the other longer ML comparisons, thank you!

PedroTeixeira

I'm interested in seeing your personal project benchmarked across systems! But some friendly advice: I think you should be consistent with your use of significant digits across measurements. 0.1m doesn't mean the same thing as 0.10m.

thedownwardmachine

It’s fine for learning, but the VRAM limitations you hit once you start dealing with production-quality algorithms will make you offload your workloads to something that has multiple A100s. Training time on rigs with dual 3090s is worth a look, to see how GPU RAM is being loaded.

somebrains
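
On the point of watching how GPU RAM is being loaded, here is a small hedged sketch of the PyTorch counters one could log during training on an NVIDIA card; the placeholder model and sizes are assumptions.

```python
# Sketch: report how much GPU memory the current process is actually using.
import torch

if torch.cuda.is_available():
    device = torch.device("cuda")
    model = torch.nn.Linear(4096, 4096).to(device)   # placeholder model
    x = torch.randn(256, 4096, device=device)
    loss = model(x).sum()
    loss.backward()

    allocated = torch.cuda.memory_allocated(device) / 2**20   # tensors in use
    reserved = torch.cuda.memory_reserved(device) / 2**20     # cached by the allocator
    peak = torch.cuda.max_memory_allocated(device) / 2**20    # high-water mark
    print(f"allocated {allocated:.0f} MiB, reserved {reserved:.0f} MiB, peak {peak:.0f} MiB")
```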

CIFAR10 is considered a small test, but for a YouTube video it's large. Truly large models have datasets of over 10 million images :)

On an NVIDIA video card with 8 GB or less, you really have to keep the batch sizes small to train with the CIFAR10 dataset. With the CIFAR100 dataset, you have to decrease the batch size to avoid running out of memory. You can also change your model in TensorFlow to use mixed precision.

woolfel
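
Here is a minimal sketch of the TensorFlow/Keras mixed-precision setup the comment mentions; the toy CIFAR-sized model is an assumption, not the benchmark code.

```python
# Sketch: enable float16 compute with float32 master weights in TensorFlow/Keras,
# roughly halving activation memory on cards with limited VRAM.
import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.Flatten(),
    layers.Dense(10),
    # keep the final softmax in float32 for numerical stability
    layers.Activation("softmax", dtype="float32"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, batch_size=64) would now run mostly in float16
```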

Now let's try comparing an RTX 4090 desktop with this $5,000 hunk of metal that is sooo great at machine learning. Bruh, if I had $5,000 lying around I would get a desktop, use it as a remote machine, and buy a thin-and-light Windows laptop for like $300.

shashank

I have multiple machines for different purposes. Two things I do absolutely require a Mac, so it's not even a question for me: iOS development with Xcode, and Final Cut Pro.

csmaca

If PyTorch could use the Neural Engine, it would be much faster.
Right now, you can only do that from Swift, I guess…

stevenhe
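
For reference, one current path from PyTorch to the Neural Engine is exporting a trained model with coremltools, which covers inference only (training on the Neural Engine is not exposed); the tiny model, input shape, and file name below are illustrative assumptions.

```python
# Sketch: convert a traced PyTorch model to Core ML so macOS can schedule it
# on the CPU, GPU, or Neural Engine (inference only, not training).
import torch
import coremltools as ct

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3), torch.nn.ReLU(), torch.nn.Flatten(),
).eval()

example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="input", shape=(1, 3, 224, 224))],
    compute_units=ct.ComputeUnit.ALL,   # let Core ML pick CPU/GPU/Neural Engine
)
mlmodel.save("tiny_model.mlpackage")
```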

Yes, please make a video on that!! Can't wait to install PyTorch with Metal. :)

planetnicky
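
Since this comment was written, PyTorch has shipped its Metal backend as the "mps" device (PyTorch 1.12+); a minimal sketch of checking for it looks like this.

```python
# Sketch: verify the Metal (MPS) backend and run a tensor op on the Apple GPU.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
elif torch.backends.mps.is_built():
    print("PyTorch was built with MPS, but this macOS/hardware can't use it")
    device = torch.device("cpu")
else:
    device = torch.device("cpu")

x = torch.randn(1024, 1024, device=device)
print(device, (x @ x).mean().item())
```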

Loved the video! Please compare the RTX 3080 Ti mobile with the M1 Max or M1 Pro if you can. That would be a good comparison, considering those RTX cards have more memory.

MHamzaMughal

Nice. Some of the Pytorch_light code doesn't seem to run, but the other benchmarks do. I'm on the 16 GB Mac Mini, and cifar10 runs. I'm up to just under 16 GB used, and it's not grabbing a bunch of swap. It may take forever to finish, but I think it will get to the end. I'll leave it running for half an hour or so. Two years ago I bought a K80 because I was running out of memory, but the power draw is significant, and mostly I use models rather than train them, so I suspect this M1 will be good enough.

dr.mikeybee
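
A small hedged sketch of how one could watch unified memory and swap while a run like that is going; it uses psutil, which is not part of the video's code, and the polling loop is illustrative.

```python
# Sketch: log overall RAM and swap pressure every few seconds during training.
import time
import psutil

def log_memory() -> None:
    vm = psutil.virtual_memory()
    sw = psutil.swap_memory()
    print(
        f"RAM used {vm.used / 2**30:.1f} GiB of {vm.total / 2**30:.1f} GiB, "
        f"swap used {sw.used / 2**30:.1f} GiB"
    )

# call this from the training loop, or poll it on the side:
for _ in range(3):
    log_memory()
    time.sleep(5)
```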