How to download and run Llama 3.2 Locally!!!

This tutorial shows how to run Meta AI's latest model, Llama 3.2, locally on a CPU or laptop using llama.cpp.
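
A minimal sketch of the same idea using the llama-cpp-python bindings; the Hugging Face repo id and GGUF filename below are assumptions, not necessarily the files used in the video:

# pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# Download a quantized GGUF build of Llama 3.2 and load it on the CPU.
# repo_id and filename are assumed; substitute the build you actually want.
llm = Llama.from_pretrained(
    repo_id="bartowski/Llama-3.2-3B-Instruct-GGUF",
    filename="*Q4_K_M.gguf",   # 4-bit quant, small enough for laptop RAM
    n_ctx=2048,                # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain Llama 3.2 in one sentence."}]
)
print(out["choices"][0]["message"]["content"])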

Comments

juliandarley:
For those using Ollama, the small text models are already available in the Ollama library, but not the larger vision models (yet).

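For the route this comment describes, a minimal sketch with the official ollama Python client (the Ollama server must be running; the 1B tag is an assumption based on the library's current listings):

# pip install ollama   (and have the Ollama server running locally)
import ollama

# Pull one of the small Llama 3.2 text models from the Ollama library.
# "llama3.2:1b" is an assumed tag; adjust to whatever the library lists.
ollama.pull("llama3.2:1b")

resp = ollama.chat(
    model="llama3.2:1b",
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(resp["message"]["content"])
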
KevinKreger:
Thanks for the small-model demo; it's more practical. I think we're still waiting for the "how blind is it" review 😅

ViewpointsVortex:
It's sad to know someone abused you in a YouTube comment.

lakshyakumarpandey:
What's the difference between using Ollama and pulling it?

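For what it's worth: in the Ollama CLI, pull only downloads or updates a model, while run also starts a session (pulling first if the model is missing). A hypothetical helper sketching that distinction with the ollama Python client:

import ollama

def run_like(model: str, prompt: str) -> str:
    # Hypothetical helper mirroring the CLI distinction:
    #   `ollama pull` -> ollama.pull(model): download/update only
    #   `ollama run`  -> chat, pulling first if the model is not local yet
    try:
        resp = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    except ollama.ResponseError:
        ollama.pull(model)  # model was not local; fetch it, then retry
        resp = ollama.chat(model=model, messages=[{"role": "user", "content": prompt}])
    return resp["message"]["content"]

print(run_like("llama3.2:1b", "What is new in Llama 3.2?"))
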
marconwps:
What VRAM does this model need? Can it run on a CPU instead of a GPU?

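llama.cpp can run entirely on the CPU, in which case the limit is system RAM rather than VRAM. A minimal sketch with llama-cpp-python; the local filename is an assumption:

from llama_cpp import Llama

# n_gpu_layers=0 keeps every layer on the CPU, so no VRAM is used at all;
# the quantized model just has to fit in ordinary system RAM.
llm = Llama(
    model_path="llama-3.2-3b-instruct-q4_k_m.gguf",  # assumed local file
    n_gpu_layers=0,   # CPU only
    n_threads=8,      # tune to your core count
)
print(llm("Q: Does this run on a CPU? A:", max_tokens=32)["choices"][0]["text"])
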
Aj-cmcp:
Hey bro, are you aware of LM Studio, which is used for downloading and running open-source models? Is it any good?

divyapratap:
Sir, is the llama.cpp server making the model smaller using INT8 quantization or a similar feature, or do we need to do something for that? By default I can see it is at FP32.

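As far as I know, llama.cpp does not quantize on the fly: the precision is baked into the GGUF file you load, so an FP32 file stays FP32 unless you convert it with the llama-quantize tool that ships with llama.cpp. A sketch of driving that step from Python; every path here is an assumption:

import subprocess

# llama.cpp ships a `llama-quantize` binary (older builds name it `quantize`).
# It rewrites a full-precision GGUF into a smaller quantized one; the server
# then simply loads whichever file you point it at. Paths are assumptions.
subprocess.run(
    [
        "./llama-quantize",
        "llama-3.2-3b-f32.gguf",   # assumed full-precision input
        "llama-3.2-3b-q8_0.gguf",  # 8-bit output, roughly a quarter the size
        "Q8_0",                    # target quantization type
    ],
    check=True,
)
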
protofaze:
Does anyone know the RAM requirements for running the model?

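There is no single official figure, but a common rule of thumb is the size of the GGUF weights plus some overhead for the KV cache and runtime. A back-of-envelope sketch with illustrative numbers:

# Rough RAM estimate: weights take about params * bits_per_weight / 8 bytes,
# plus overhead for the KV cache and runtime. All figures are illustrative.
def est_ram_gb(params_billions: float, bits_per_weight: float, overhead_gb: float = 1.0) -> float:
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb + overhead_gb

print(est_ram_gb(3.0, 4.5))  # 3B at ~4.5 bits/weight: ~1.7 GB weights, ~2.7 GB total
print(est_ram_gb(1.0, 4.5))  # 1B: roughly 1.6 GB total
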
UCsktlulEBEebvBBOuDQ:
I'm as racist as they get, but you don't deserve any hate; you're 100% friend material! Keep up the great videos. o7

TheRealUsername:
Bro, WTF, your last vid was posted 51 minutes ago.