How to Install and Test LLaMA 3 Locally [2024]

After the release of Llama 3, I thought I should make a video to walk through using it locally for anyone who wants to try it. I hope this video helps :)

Related links:

- - - - - - - - - - - - - - - - - - - - - -

Follow us on social networks:

---

#ai #artificialintelligence #llama3 #metaai #install #macos #machine #machinelearning #aitools #programming #softwaredeveloper #softwareengineer #webdeveloper #developer #iosdeveloper #mobiledevelopment #coding #coder #javascript #developer #computerscience #computersciencestudent #100daysofcode #html #css #programmer #vue #npmpackage #npm #package #CodeNewbies #Code_with_bro #code_withbro #youtubechannel #youtube #youtuber #youtubers #subscribe #youtubevideos #sub #youtubevideo #like #instagram #follow #video #vlog #subscribetomychannel #gaming #music #explorepage #love #smallyoutuber #vlogger #youtubegaming #instagood #llms #youtubecommunity #likes #explore #youtubelife #youtubecreator #ps #bhfyp #fotiecodes
Comments
Author

Model variants

Instruct is fine-tuned for chat/dialogue use cases.
Example: ollama run llama3 or ollama run llama3:70b

Pre-trained is the base model.
Example: ollama run llama3:text or ollama run llama3:70b-text
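The four variant commands mentioned above, collected in one place (tags as listed in the comment; a local Ollama install is assumed):

```shell
# Instruct (chat-tuned) variants
ollama run llama3            # 8B instruct
ollama run llama3:70b        # 70B instruct

# Pre-trained base variants
ollama run llama3:text       # 8B base
ollama run llama3:70b-text   # 70B base
```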

codewithbro

Nice video! Can you do one on how to set up a local database from PDF files, so the AI would be able to search those PDFs and answer questions about their content?
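A full local-PDF QA setup would be its own video, but the retrieval step behind the idea can be sketched with plain keyword scoring. This assumes the PDF text has already been extracted; real pipelines would use a PDF parser (e.g. pypdf) plus embeddings, and all names here are illustrative:

```python
# Minimal keyword-retrieval sketch: score pre-extracted PDF passages
# against a question and return the best matches, which would then be
# fed to the model as context.
import re
from collections import Counter

def tokenize(text):
    # Lowercase alphanumeric tokens only
    return re.findall(r"[a-z0-9]+", text.lower())

def rank_passages(passages, question, top_k=2):
    """Return up to top_k passages sharing the most words with the question."""
    q = set(tokenize(question))
    scored = []
    for p in passages:
        # Count how many passage tokens (with multiplicity) appear in the question
        overlap = sum(c for w, c in Counter(tokenize(p)).items() if w in q)
        scored.append((overlap, p))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

passages = [
    "Llama 3 comes in 8B and 70B parameter sizes.",
    "Windmills were used to drain Dutch polders.",
    "Ollama serves models over a local REST API.",
]
print(rank_passages(passages, "What parameter sizes does Llama 3 come in?"))
```

Swapping the keyword scorer for an embedding model turns this into a standard local RAG loop.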

Knuhben

I love how in the intro, we're being told about a new AI, by an old one.

SevScout

llama 3 in action:

>>> How many liters of water per minute can a Dutch windmill pump out?
That's an interesting question!

The answer depends on the specific design and size of the windmill. However, I can give you some general
information.

Traditionally, Dutch windmills are designed to pump water from shallow sources, such as polders or wells, for
irrigation purposes. The capacity of these windmills varies greatly, but a typical small to medium-sized windmill
might be able to pump around 0.5 to 2 liters per minute (L/min).

Some larger industrial-scale windpumps can pump much more, up to 10-20 L/min or even more, depending on the
turbine design and the pressure head of the water.

Really?

podunkman

What about the Hugging Face chat UI?
I recently installed Ollama with Open WebUI. It worked fine, but after a day the Docker container started restarting because of a CUDA error.

I don't want to trust it now because I can't find a fix, although reinstalling did fix it.

And in the terminal it always worked, and faster than the web UI.

anshulsingh

How can I run the original, non-quantized version of Llama 3 8B? (It's almost 15 GB.)
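Ollama's library publishes unquantized builds as separate tags alongside the default quantized one. The exact tag name below is an assumption; check the llama3 page in the Ollama model library for the current tag list:

```shell
# fp16 (unquantized) 8B instruct build, roughly 16 GB of weights;
# tag name may differ — verify it on the Ollama library page for llama3
ollama run llama3:8b-instruct-fp16
```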

gallyyouko

How do you add your models? I have Ollama set up in my PowerShell command line, but when I used the UI it didn't show up, and I don't know how to upload them. Please help!

BrandWorksApp

What if the model only uses CPU resources? It's very slow; it would be great to speed it up with a GPU.
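If Ollama is running inside Docker, GPU access has to be granted explicitly. Per the Ollama Docker documentation this requires the NVIDIA Container Toolkit on the host:

```shell
# Run the Ollama container with all NVIDIA GPUs visible
# (requires the NVIDIA Container Toolkit installed on the host)
docker run -d --gpus=all -v ollama:/root/.ollama \
  -p 11434:11434 --name ollama ollama/ollama
```

Outside Docker, Ollama uses a supported GPU automatically when its drivers are detected; falling back to CPU usually means the driver/toolkit setup is the problem.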

LordeSam

How can I run the larger version of the model?

SirDragonClaw

I didn't really get the part on Docker, can you help me?

Baly

Is there a way to run it with a custom-developed UI? Can someone please guide me on how?
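A custom UI just needs to talk to the local Ollama server over its REST API. The /api/chat endpoint and payload shape below follow the Ollama API docs; the helper names are my own, and the request itself obviously needs a running server:

```python
# Sketch of wiring a custom UI to a local Ollama server via its REST API.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_payload(model, user_message, stream=False):
    """Build the JSON body Ollama's /api/chat endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,
    }

def ask(model, user_message):
    """Send one chat turn to the local Ollama server (must be running)."""
    data = json.dumps(build_chat_payload(model, user_message)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    # Payload construction works without a server; ask() needs Ollama running.
    print(build_chat_payload("llama3", "Hello!"))
```

Any front end (web, desktop, mobile) can POST the same payload, so the UI layer is entirely up to you.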

dylandsilva

Can I run this fully offline, with NO DATA being sent anywhere, while it responds to user queries?

mattpears

How do I change the installation location and the download location of the model?
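The model download directory can be moved with the OLLAMA_MODELS environment variable, which is documented in the Ollama FAQ; the path below is just an example:

```shell
# Point Ollama at a different model directory before starting the server.
# OLLAMA_MODELS is documented in the Ollama FAQ; the path is an example.
export OLLAMA_MODELS=/mnt/bigdisk/ollama-models
ollama serve
```

On Windows, set the same variable in the system environment settings instead of using export.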

GiochiamoinsiemeadAndrydex

Why don't any of these have a proper graphical interface... I hate having a black screen to stare at.

PedroHenriquePS

I'm glad that my PC handles 70B with 16GB VRAM and 64GB of RAM. I wonder what the absolute minimum requirements are for 400B.
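The sizing question above can be sanity-checked with back-of-the-envelope arithmetic: weight memory scales as parameters times bits per weight. This ignores KV cache and runtime overhead, so the results are lower bounds:

```python
# Rough weight-memory estimate for a model: params * bits-per-weight / 8.
# Ignores KV cache, activations, and runtime overhead (lower bound only).
def weights_gb(params_billions, bits_per_param):
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"400B at {bits}-bit: ~{weights_gb(400, bits):.0f} GB")
```

Even at 4-bit quantization, a 400B model needs roughly 200 GB just for weights, far beyond any single consumer machine.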

cptcrogge

Stop clickbaiting, the 400B isn't even out.

benbork

Advice: use PowerShell, not Git Bash 🤣

Ligthus

content is fine, but your explanation wise

viniciusmelo