M4 Mac Mini vs AI mini PC

The M4 Pro Mac Mini vs. the Ryzen AI 9 HX 370

Use COUPON: ZISKIND10

🛒 Gear Links 🛒

🎥 Related Videos 🎥

* 🛠️ Mini PC portable setup -
* 🛠️ Dev setup on Mac -

— — — — — — — — —

❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺

— — — — — — — — —

Join this channel to get access to perks:

— — — — — — — — —

#minipcs #macmini #m4pro
Comments

Hi, what are you using to split the display inputs?

yusuffaizullin

Awesome channel. How are you able to use that display for two machines?

meiniesmith

I’ve been waiting for your video on Mac mini spec recommendations for developers. I’m a beginner looking to buy a Mac mini to learn coding, and I’d like to keep it for professional use in the future as well. I’ve been interested in web3 development, as well as 3D web apps using three.js.

Visdemhvemduer

Alex, how did the base M4 Mac Mini do?

seantyler

What model from Ollama are you running? How many parameters? Also, great videos. Love them.

juanman

Did you use Ollama with the GPU or the CPU? Because AFAIK you have to specify which one to use (via a terminal argument), or use LM Studio, where you can choose.

SahilP
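
As a side note on the question above: one way to compare a GPU run against a CPU-only run with the same prompt is the ollama Python client. This is only a sketch, and it assumes that the num_gpu option (number of layers offloaded to the GPU, with 0 forcing a CPU-only run) is honored through the options dict.

import ollama  # pip install ollama; assumes a local Ollama server is running

prompt = "Write a haiku about unified memory."

# Default run: Ollama offloads layers to the GPU when a supported backend is found.
gpu_run = ollama.generate(model="llama3.1:8b", prompt=prompt)

# Assumption: num_gpu=0 (no layers offloaded) forces a CPU-only run of the same model.
cpu_run = ollama.generate(model="llama3.1:8b", prompt=prompt, options={"num_gpu": 0})

# eval_count / eval_duration (nanoseconds) give rough tokens-per-second numbers.
for label, run in (("gpu", gpu_run), ("cpu", cpu_run)):
    tps = run["eval_count"] / (run["eval_duration"] / 1e9)
    print(f"{label}: {tps:.1f} tok/s")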

Hi, awesome channel 😊
I’m very much interested in local LLMs, especially M4 Pro vs. M4 Max, to see whether it’s worth waiting for the M4 Studio or just getting the mini.

MichaelKörner

So you got a Keychron V5 Max, nice. Alex, how many keyboards do you have right now?

abhiranjan

I saw a video a moment ago where LTT ran the Llama 3.1 405B model on the Mac mini with 16GB of unified memory... and it was quite good! Could you check whether that was real? Because I can't believe it O_o

patrickr

What would be the performance difference if it were Linux on the x86 machine?

AKagNA

The Mac mini could be even faster with mlx-lm instead of Ollama

thcleaner
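
For anyone who wants to try that comparison, a minimal mlx-lm sketch looks roughly like the following. The mlx-community model name is just an assumed example of a 4-bit MLX conversion; verbose=True prints tokens-per-second stats to compare against the Ollama numbers.

from mlx_lm import load, generate  # pip install mlx-lm (Apple silicon only)

# Assumed example model; any 4-bit MLX conversion of a similar-size model would do.
model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

# verbose=True prints prompt and generation speed in tokens per second.
text = generate(
    model,
    tokenizer,
    prompt="Explain unified memory in two sentences.",
    max_tokens=128,
    verbose=True,
)
print(text)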

Wish you had done this test with the M4 Mac mini, not the pro.

arnobchowdhury

Where is Schwarzenegger to press the Enter keys???

codefallacy

Irrelevant test without Schwarzenegger.

RealDKuz

Ollama runs a bit differently on each platform. Using llama.cpp directly would produce more consistent, comparable results on both.

Supposedly that Ryzen PC does 80 TOPS when fully utilizing CPU+GPU+NPU, while the M4 does 38 TOPS. My guess is that Ollama is relying on precompiled binaries that are missing some processor-specific features on the Ryzen chipset.

differentmoves
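
One way to drive the same llama.cpp code path on both machines is through the llama-cpp-python bindings. A sketch under those assumptions, with the GGUF filename as a placeholder for whatever quantized model is being tested:

from llama_cpp import Llama  # pip install llama-cpp-python, built with the GPU backend you want

llm = Llama(
    model_path="./llama-3.1-8b-instruct-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload every layer to the GPU backend if one is available
    n_ctx=4096,
)

out = llm("Explain the difference between TOPS and tokens per second.", max_tokens=128)
print(out["choices"][0]["text"])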

Ollama runs on the Apple GPU? I thought Ollama only ran on the Apple CPU?

ps

The Mac mini is basically a GPU that can use all of main memory. Still not faster than a 3090, though.

seanmcdirmid

Got 138 t/s on my Windows computer from 1.5 years ago, nice

Fran-kcgu

ARM is the best 🤭, but not on Windows 😁😁

wakathepublic

What’s the power draw difference? Also, you needed a Schwarzenegger for this. I am not sure these results are accurate.

lwwells