Apple M3 Max MLX beats RTX 4090m


The Apple MacBook Pro with the M3 Max chip is even more capable in machine-learning workflows now that the MLX framework is out. Here I test it against the NVIDIA RTX 4090 laptop GPU in one of my typical workflows: speech-to-text.

Use COUPON: ZISKIND10

🛒 Gear Links 🛒

🎥 Related Videos 🎥

🛠️Code🛠️

— — — — — — — — —

❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺

— — — — — — — — —

📱LET'S CONNECT ON SOCIAL MEDIA

— — — — — — — — —

#m3max #m2max #machinelearning
Comments

Awesome video! I would love to see more LLM or other DL architectures benchmarked between the M3 Max and the RTX 4090m laptop. A definitive video saying the M3 Max is X% better/worse than the 4090m for RNN, CNN, or transformer architectures would be a gold mine for other AI/ML devs like me!

collinpurcell

Watched tens of your videos before upgrading from my old i9 MacBook Pro to my M3 Max MacBook Pro.

Nowadays I still watch your videos (even if I already have an M3 MacBook) because I like the way you make your content – pragmatism, tone of voice, length and cuts.

👏

roccellarocks

No, it's not faster. You're not using fast whisper. Also, the Python implementation absolutely uses the GPU; set the device to mps.

asjsjsienxjsks
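The "set device to mps" suggestion above can be sketched in PyTorch. This is a minimal illustration, not the video's actual code; the helper name and the fallback order (mps, then cuda, then cpu) are my assumptions.

```python
# Minimal sketch: pick the best available PyTorch device,
# preferring Apple's Metal backend (mps), then CUDA, then CPU.
import torch

def pick_device() -> torch.device:
    if torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
# With openai-whisper you would then move the model over, e.g.
# model = whisper.load_model("base").to(device)  # illustrative only
print(device)
```

Note that moving a model to `mps` only helps if every op in the graph has a Metal kernel; unsupported ops fall back to CPU unless you opt out.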

Found your channel from Fireship vid ~2ya. Awesome stuff!

randysavage

Wow, exciting results! I was always optimistic that Apple's unified memory architecture would pay dividends in certain workloads, and MLX appears to be effectively exploiting that paradigm shift.

Keep up the good work! Love the channel!

stephensiemonsma

Alex, I found your channel when researching my M3 Max laptop purchase. I love your benchmark methodology, and I also wish I could copy some of your workflows. If you added a code repository to your membership, I would join!

markclayton

Thank you! What is the correct way to compare my current AMD Radeon Pro 5300M 4 GB (MacBook Pro 2019) to Apple M-series silicon, in terms of the MacBook gaming experience? I play a game from time to time and would like to make sure an M chip won't take that away from me :)

mr_ww

Hey, amazing video, very useful. 5:18 - I'd be interested to see a video on how to install Whisper with GPU support, etc.

Itcornerbg

Hmm, this difference may be from RAM/VRAM sharing on ARM Macs.

The ARM GPU can use up to 75% of RAM as VRAM. I don't know which of the 64/96/128 GB RAM versions you have, but in all cases that will be more VRAM than the 20 GB in the 4090.

Dadgrammer

RTX 4090m is equivalent to the desktop RTX 3080 btw.

johnkost

7:23 Vision Pro Light Seal Cushion spotted 👀

Anshulb

Can you make iPad and iPhone app versions of these tests so we can benchmark the M4 on iPad in a couple of days?

skyhawk

WSL uses Hyper-V; there is no way around it.

MSI laptops are always noisy. If you need a powerful and less noisy Windows laptop, the Lenovo Legion 9i is a better choice.

PratimGhosh

Why don't you run Linux on the 4090 PC?

donaldadugbe

"PC Master Race" on suicide watch !! 😂
(and yes, it's quite probably the M-series chips' Unified Memory architecture that's making the difference here)

MrLocsei

Part of Apple's long game here is to dominate the mobile market in every way, and part of that domination will require robust machine-learning capability and speed, even for the small models that are better suited to mobile ML applications. They make their machines run small models insanely fast, and that's where they're going to have a huge edge in the future.

rondobrondo

Too bad the proprietary silicon is anchored to a POS company like Apple; I don't want to spend $800 on an extra 64 GB of memory.

yesyes-ompo

Two MacBook Pros died after 14 months. If I could buy a new one every year, that would be just GREAT.
8 GB of RAM is not enough, but Apple figures profits are better than selling a computer with enough memory to do the job. "Job" - does that remind you of someone??? Too bad we are Cooked.

rupertchappelle

Nvidia seriously needs to up its game with VRAM capacity. But why would they, when their competitors are as useless as Intel and AMD?

divyanshbhutra
Автор

Want to watch the stable diffusion one. Want to meet up? I'm in DMV

chrisa