Can my laptop run Meta's Llama 3, WizardLM 2, DBRX, Mixtral 8x22b, and Command R+?

In this video, I look at the models that were made available through Ollama this week and see whether they can run on my Apple M3 MacBook Pro with 64 GB of RAM. Specifically, I look at Meta's Llama 3 (70b and 8b), Mistral's Mixtral 8x22b, Databricks' DBRX, Cohere's Command R+, and Microsoft's WizardLM 2 (8x22b and 7b).
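If you want to try the same models yourself, here is a minimal sketch using Ollama's Python client (the `ollama` package). The model tags are assumptions based on the Ollama model library at the time and may have changed; the large models are tens-of-gigabyte downloads.

```python
# Minimal sketch using the `ollama` Python package (pip install ollama).
# Assumes a local Ollama server is already running. The model tags below
# are assumptions and may have changed in the Ollama library.
import ollama

models = [
    "llama3:70b", "llama3:8b",         # Meta's Llama 3
    "mixtral:8x22b",                   # Mistral's Mixtral
    "dbrx",                            # Databricks' DBRX
    "command-r-plus",                  # Cohere's Command R+
    "wizardlm2:8x22b", "wizardlm2:7b", # Microsoft's WizardLM 2
]

for model in models:
    ollama.pull(model)  # downloads the weights if not already present
    reply = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(f"{model}: {reply['message']['content']}")
```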

Running models locally like this lets you try out many different models, and even use uncensored ones.

👍 Please like if you found this video helpful, and subscribe to stay updated with my latest tutorials. 🔔

🔖 Chapters:
00:00 Intro
02:25 DBRX
04:50 WizardLM2 8x22b
07:08 Command R+
10:55 Mixtral 8x22b
12:57 Llama3 70b
15:36 Llama3 8b
17:39 WizardLM2 7b
19:13 Final thoughts

🔗 Video links:

🐍 More Vincent Codes Finance:

#llama3 #mixtral #dbrx #command-r-plus #wizardlm2 #chatgpt #llm #largelanguagemodels #ollama #openwebui #gpt #opensource #cohere #databricks #opensourceai #llama2 #mistral #bigdata #research #researchtips #professor #datascience #dataanalytics #dataanalysis #uncensored #private #mac #macbookpro #m3 #claude #anthropic
Comments

I like your videos! Why do you think your Mac is using the CPU? I was able to run some of the larger models with no real issues. Timing-wise, it was closer to 10-15s to initialize for me.

jacobeconomy
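On the CPU-vs-GPU question above: one way to check where Ollama has loaded a model is its process listing. A rough sketch, assuming a recent `ollama` Python client that exposes `ps()` (mirroring the server's `/api/ps` endpoint); the field names follow that endpoint's JSON:

```python
# Sketch: inspect whether a loaded model is resident in GPU memory.
# Assumes the `ollama` Python package exposes ps() (mirrors /api/ps).
import ollama

for m in ollama.ps()["models"]:
    # size is total memory used by the model; size_vram is the portion on
    # the GPU, so size_vram == 0 suggests the model is running on the CPU.
    print(m["name"], "vram bytes:", m["size_vram"], "total bytes:", m["size"])
```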

Can you test how well the 8k context size of Llama 3 works?

patrickwasp
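A rough sketch of how such a context test could be scripted: bury a fact early in a long prompt and ask for it back. The `num_ctx` option is Ollama's context-window setting; the filler text, padding amount, and model tag here are illustrative assumptions.

```python
# Hypothetical needle-in-a-haystack probe of Llama 3's 8k context window
# through Ollama's Python client.
import ollama

needle = "The secret code is 4217."
filler = "Lorem ipsum dolor sit amet. " * 900  # rough padding toward ~8k tokens
prompt = needle + " " + filler + " What is the secret code?"

reply = ollama.chat(
    model="llama3:8b",                 # assumed tag for the 8b model
    messages=[{"role": "user", "content": prompt}],
    options={"num_ctx": 8192},         # request the full 8k context window
)
print(reply["message"]["content"])     # should mention 4217 if recall works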

Which Ollama version were you running for all of these models?

DannyCatacora
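For anyone checking their own setup: besides `ollama --version` on the command line, a local Ollama server reports its version over HTTP. A small sketch, assuming the default port 11434:

```python
# Sketch: query a local Ollama server's version via its /api/version endpoint.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/version") as resp:
    print(json.load(resp)["version"])
```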