How to Download Llama 3 Models (8 Easy Ways to Access Llama 3)

🔗 Links 🔗

This tutorial shows how to download Meta AI's newly released Llama 3 models.

You'll learn to download and use the Llama 3 models locally, and also on free websites!

❤️ If you want to support the channel ❤️
Support here:

🧭 Follow me on 🧭
Comments

Great work getting these videos up in such a short time! Really helpful!

rodvik

Thank you for providing so many different ways to access Llama 3. I didn't even know half of them before watching the video.

NoTimeWaste

For reference: I have 12 GB VRAM and 32 GB RAM, and I can run the Llama 3 70B 4-bit quant (barely, by splitting between VRAM and RAM so that 11 GB VRAM and 31 GB RAM are used). It takes me about a minute per word, but it works. I recommend trying a 3-bit quant or sticking with Llama 3 8B unless you have patience or better hardware :)

lexuscrow
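The VRAM/RAM split described above is what llama.cpp's layer-offloading flag does: a sketch, assuming a llama.cpp build and a locally downloaded GGUF quant (the model filename and the layer count here are hypothetical — tune the count to whatever fits your VRAM; the binary is named `main` in older builds):

```shell
# Offload as many transformer layers as fit in VRAM;
# the remaining layers run on the CPU from system RAM.
# Filename and -ngl value are examples, not prescriptions.
./llama-cli \
    -m ./Meta-Llama-3-70B-Instruct.Q4_K_M.gguf \
    --n-gpu-layers 20 \
    -p "Why is the sky blue?"
```

Lowering `--n-gpu-layers` trades speed for VRAM headroom, which matches the slow-but-working behavior the commenter reports.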

Please make videos on how we can use these models, test them in different scenarios, or maybe use them in web apps. There are no videos on these topics on YouTube.

PRFKCT

I have been trying image generation with this, and it is substantially better and faster.
In the future, if possible, can you make a tutorial on Llama 3 with images?

snehitvaddi

Bro, you are the GOAT.
I was so confused when reading the README.

johnsaxz

When you run Llama 2 locally using Ollama, which GPU is advisable?

sajeebhussain

Great video. Yes, can you please create a Colab example for Llama 3?

marcoaerlic

You're right, Perplexity Labs has Llama 3 running, and it's fast.

Edoras

I had a very bad experience downloading it. I have a MacBook Air M2 with 8 GB, and it lagged so hard it was like using a cheap laptop. Also, the Llama 3 I installed produced wrong code when I asked for a prime-number program, and when I talked about Ollama it told me to "keep things respectful and not use any vulgar language".

YG-wkqm

When I use Llama 3 8B on Ollama or LM Studio, it is much dumber than on OpenRouter, even after resetting all parameters to factory defaults and loading the Llama 3 preset, and even with the full non-quantized 8-bit version in LM Studio.

emanuelec

How can I download the weights into Databricks DBFS?

prashanthkolaneru

I'm so impatient for Groq to host the model; soon we will see blazing-fast, high-quality agents working together.

enekxtw

Not to forget the RAGNA Desktop app, even though it's only available for Mac so far ;)

svenst

Install Ollama, open a terminal, type "ollama run llama3"... done.

NeoIntelGore
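The one-liner above really is the whole flow; expanded slightly as a sketch, assuming Linux/macOS (the install script is Ollama's documented installer; on Windows, use the installer from ollama.com instead):

```shell
# Download and install Ollama (Linux/macOS installer script)
curl -fsSL https://ollama.com/install.sh | sh

# Pull the 8B model and start an interactive chat
ollama run llama3

# Or pull the 70B variant instead (needs far more RAM/VRAM)
ollama run llama3:70b
```

The first `ollama run` downloads the quantized model automatically, so there is no separate download step.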

I only have 8 GB of RAM. Are those 2- or 3-bit quantized versions any good? Because those are the only ones I can run.

nikhilmish

Does "ollama run llama3" only give a 2k context window?

patrickwasp
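On the 2k question above: Ollama defaults its num_ctx parameter to 2048 regardless of the model's native window (8192 for Llama 3), which may be what the commenter is seeing. A sketch of raising it with a Modelfile, assuming a recent Ollama version (the model tag "llama3-8k" is just an example name):

```shell
# Modelfile: raise the context window to Llama 3's native 8k.
# FROM and PARAMETER are standard Ollama Modelfile directives.
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_ctx 8192
EOF

# Build a derived model with the larger window, then chat with it
ollama create llama3-8k -f Modelfile
ollama run llama3-8k
```

A larger num_ctx raises memory use, so on small GPUs the 2048 default is a deliberate trade-off rather than a model limitation.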

Bro, how did you make the video so fast lol

TheGamingAlong

Bro, what specs are needed to run these models?

What if my laptop doesn't have a GPU?

Gregadori

NousResearch must have removed those LLMs. They're no longer accessible or visible.

AnthonyTrivett