How To Run Llama 3 8B, 70B Models On Your Laptop (Free)


Unlock the power of AI right from your laptop with this comprehensive tutorial on setting up and running Meta's latest Llama 3 models (the 8B and 70B versions). We will use Ollama to run these models locally on your laptop, completely free.

What You'll Learn:
- An overview of LLaMA models and their capabilities.
- Step-by-step instructions on setting up your system for LLaMA 3.
- Tips on optimizing performance for both the 8B and 70B models.
- Troubleshooting common issues to ensure a smooth operation.
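As a quick reference, the basic setup covered in the video boils down to a couple of commands (the install script is Ollama's official one for Linux/macOS; download sizes are approximate for the default quantized builds):

```shell
# Install Ollama (Linux/macOS install script; Windows users can
# grab the installer from ollama.com instead)
curl -fsSL https://ollama.com/install.sh | sh

# Pull and chat with the 8B model (roughly a 4.7 GB download)
ollama run llama3:8b

# Pull and chat with the 70B model (roughly a 40 GB download;
# needs a correspondingly large amount of RAM/VRAM)
ollama run llama3:70b
```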

#LLaMA3 #MetaAI #AITutorial #MachineLearning #Coding #TechTutorial
Comments

Informative and straight to the point, thank you!

PJ-higz

Most underrated channel. You deserve way more, dude! ☺

sphansel

Thank you for the guide, great stuff! Just a heads up, there's a slight error in the command table within the written guide. The command for the 70B should be `ollama run llama3:70b` instead of `ollama run llama3:8b`

mustafamohsen

Thanks for this insightful video.
Is it possible to install it on a local server and use it from laptops over the WiFi network?
Is it possible to create specialized AI assistants, say for internet search, writing... or even trained on local data?
Thanks in advance.

m.bouanane
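On the local-server question above: Ollama binds to localhost by default, but it can be told to listen on all network interfaces so other machines on the same WiFi network can reach it. A minimal sketch; the LAN IP address below is a placeholder for your server's actual address:

```shell
# On the server: make Ollama listen on all interfaces,
# not just localhost
OLLAMA_HOST=0.0.0.0 ollama serve

# On a laptop on the same network, point a client at the
# server's LAN address (192.168.1.50 is a placeholder):
curl http://192.168.1.50:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Hello", "stream": false}'
```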

Nice guide with easy-to-follow written instructions, thanks!

TimeTalesTT

This might be a dumb question, but can it use Intel's AI Boost NPU instead of the graphics card?

kalpratama

How do I do it on a virtual machine, so I can host a model in the cloud for a business use case?

PrasanNH

Hey, I want to use the Ollama version in my Jupyter notebook. Just like we use other models through an API, I want to call it from my notebook for a continuous task; how do I do that? Also, running it on a GPU would be much faster, like models loaded from transformers, but I don't want to use transformers. I want the model I already loaded from Ollama, like you did in the video, because I think that will save time and downloads too. Can we do that?

gamersdepo
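Regarding the Jupyter question above: Ollama exposes a local REST API on port 11434, so a model you already pulled can be called from a notebook without downloading it again through transformers (GPU acceleration is handled by the Ollama server itself). A minimal sketch using only the standard library; the model tag and prompt are illustrative:

```python
import json
import urllib.request

# Build a request against Ollama's local /api/generate endpoint.
# "llama3" assumes the model was already pulled with `ollama run llama3`.
payload = {
    "model": "llama3",
    "prompt": "Summarize what an NPU is in one sentence.",
    "stream": False,  # ask for a single JSON response, not a stream
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the Ollama server is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

The same endpoint works from any HTTP client, so the call can sit inside a loop for continuous tasks.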

I've installed the 70B model on my desktop, which has 64GB of memory, but it is running super slow. Any tips? Thanks!

Muzick

Hello, what would be the recommended hardware specs to run Llama 3 70B with good performance for multiple users (~5)?

nqaiser

I'm jealous of your internet speed bro :(

thesattary

Forgive me, I'm new to coding, but could I get it running outside the terminal so it can have a nice GUI?

ElcoolMo

Does it have an endpoint I can access from localhost, so I can make my own HTML interface?

qtUnluckyThreshh

Doesn't it have an API that we can use instead of installing it on our own PCs?

juritronics

Can I run the 8B model with 8GB of memory? Will it work? I don't mind it being slow.

hunterking