How To Use Meta Llama 3 With Hugging Face And Ollama

Llama 3 is now available on Hugging Face, on Kaggle, and with Ollama.
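As a quick sketch of the Ollama route: once the model is pulled locally, a running Ollama daemon exposes a REST endpoint at `localhost:11434`. The sketch below assumes Ollama's documented `/api/generate` endpoint and the `llama3` model tag; it is an illustration, not the video's exact code.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumes `ollama serve` is running).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the prompt to a locally running Ollama instance."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama pull llama3` and a running daemon):
# print(generate("llama3", "Why is the sky blue?"))
```

The same prompt can also be run interactively with `ollama run llama3` in a terminal.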
-------------------------------------------------------------------------------------------------
Support me by joining the channel membership so that I can keep uploading these kinds of videos.
-----------------------------------------------------------------------------------

►Data Science Projects:

►Learn In One Tutorials

End To End RAG LLM App Using LlamaIndex And OpenAI - Indexing And Querying Multiple PDFs

►Learn In a Week Playlist

---------------------------------------------------------------------------------------------------
My Recording Gear
Comments

Instead of showing every new model, it would be better to implement a single open-source LLM for all use cases, including fine-tuning. At the same time, it would be better to build an end-to-end project with an open-source LLM.

sanadasaradha

Hi Krish, thank you for the great knowledge you are sharing. I want to run Ollama on AWS SageMaker; do you know of, or have, any video on doing that? I already saw your video on installing and working locally, but I do not know how to do it on the cloud.

saharyarmohammadtoosky

When using the Llama 3 model, it sometimes gives me an incomplete answer. What can I do to avoid incomplete responses when generating from the Serverless Inference API?

nitinjain
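On the truncation question above: incomplete answers from the Serverless Inference API are often just the default output-token limit being hit. A minimal sketch, assuming the hosted `meta-llama/Meta-Llama-3-8B-Instruct` endpoint and the text-generation task's `max_new_tokens` parameter (check the Inference API docs for your model):

```python
import json
import urllib.request

# Assumed hosted model endpoint; substitute the model you are calling.
API_URL = ("https://api-inference.huggingface.co/models/"
           "meta-llama/Meta-Llama-3-8B-Instruct")

def build_payload(prompt: str, max_new_tokens: int = 512) -> dict:
    """Raise max_new_tokens so the reply is not cut off at the default limit.

    return_full_text=False asks for only the generated continuation,
    not the prompt echoed back.
    """
    return {
        "inputs": prompt,
        "parameters": {
            "max_new_tokens": max_new_tokens,
            "return_full_text": False,
        },
    }

def query(prompt: str, token: str) -> str:
    """POST the prompt to the Serverless Inference API (needs an HF token)."""
    body = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())[0]["generated_text"]
```

If answers still stop mid-sentence, the model may simply have emitted an end-of-sequence token early; raising `max_new_tokens` only removes the hard cutoff.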

Hi, there will be 10 billion models coming in the future, and so on...

girishkumar

Hi Krish,
Firstly, thanks for your consistent efforts to keep us updated and learning the latest tech in the realm of Data Science.
How can I sponsor you on GitHub and earn a badge?

KunalDixitEdukraft

Sir, please make a video on how to train LLM models on custom data.

happyhours.

Hi Krish, I also tried Llama 3 in Ollama, but the response is slow for me. On your machine the response is fast; are you using a GPU, and if so, which one?

mhemanthkmr

Is there a way I can use int8 (i.e., select a specific quantization) for Llama 3.1 using Ollama?

theyoungitscholar
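On selecting a quantization: Ollama publishes each model under multiple tags, and the quantization is encoded in the tag name (for example, `q8_0` for an 8-bit variant). The exact tag strings below are an assumption; check the model's page on ollama.com/library for the tags that actually exist before pulling.

```python
import subprocess

def ollama_tag(base: str, size: str, variant: str, quant: str) -> str:
    """Compose an Ollama model tag such as 'llama3.1:8b-instruct-q8_0'.

    The components (size, instruct/base variant, quantization suffix)
    follow the naming pattern used in the Ollama library; verify the
    combination exists before using it.
    """
    return f"{base}:{size}-{variant}-{quant}"

def pull(tag: str) -> None:
    """Download the tagged model (requires Ollama to be installed)."""
    subprocess.run(["ollama", "pull", tag], check=True)

# Example (only with Ollama installed and the tag confirmed to exist):
# pull(ollama_tag("llama3.1", "8b", "instruct", "q8_0"))
```

After pulling, `ollama run llama3.1:8b-instruct-q8_0` uses that quantization directly instead of the default tag.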

Please make a video on how to fine-tune LLaMA! I would appreciate it :) Love your videos, btw.

janneskleinau

New models will keep mushrooming every day. I think videos should now focus on more end-to-end projects using these models, not just sentiment analysis, language translation, or text summarization; some real-life project, end to end.

KumR

Hello sir, I have a doubt. Is there any platform to find the latest research papers to read? Also, how can I stay updated about the latest developments in the fields of general AI and AI?

itzmeakash

This can be fine-tuned in the same way, right?

OmSingh-ngnp

Take a look at this demo:

>>> How many liters of water per minute can a Dutch windmill pump out?
That's an interesting question!

The answer depends on the specific design and size of the windmill. However, I can give you some general information.

Traditionally, Dutch windmills are designed to pump water from shallow sources, such as polders or wells, for irrigation purposes. The capacity of these windmills varies greatly, but a typical small to medium-sized windmill might be able to pump around 0.5 to 2 liters per minute (L/min).

Some larger industrial-scale windpumps can pump much more, up to 10-20 L/min or even more, depending on the turbine design and the pressure head of the water.

Really? What a sh****

podunkman

Sir, how and from where will we get the API key?

ChemFam.

My command prompt is raising an error saying Ollama is not found. Any help, guys?

kshitijnishant
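On the "Ollama not found" error above: that message usually means the `ollama` executable is not on the shell's PATH (common right after installing, before the terminal is reopened). A small diagnostic sketch; the advice strings are my own phrasing, not Ollama's output:

```python
import shutil

def diagnose(which_result):
    """Turn the result of shutil.which('ollama') into advice.

    Pass in shutil.which('ollama'): None means the executable is not
    reachable via PATH; a string is the resolved location.
    """
    if which_result is None:
        return ("not found: reinstall from ollama.com, or add its install "
                "directory to PATH, then reopen the terminal")
    return f"found at {which_result}"

# Example:
# print(diagnose(shutil.which("ollama")))
```

On Windows, the installer normally updates PATH itself, but any command prompt opened before the install will not see the change until it is restarted.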

Dude, don't just copy from Hugging Face and make a video... try to implement these models on some problem, like NER.

rajsharma-bdsl

Thanks, Krish.
Please answer my email ❤

mohsenghafari

Bhaiya, how do I convert a Hindi video into Hindi text? Please 🙏 reply, much needed ❤

JourneyWithMystics

Where will those downloaded model files get saved?

nishant
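On where the downloaded models live: by default Ollama stores pulled model blobs under `~/.ollama/models` (on Windows, under `.ollama\models` in the user profile), and the `OLLAMA_MODELS` environment variable overrides that location. A small sketch reflecting that layout:

```python
import os
from pathlib import Path

def ollama_model_dir() -> Path:
    """Return the directory where Ollama stores pulled models.

    The OLLAMA_MODELS environment variable, when set, overrides the
    default ~/.ollama/models location.
    """
    override = os.environ.get("OLLAMA_MODELS")
    if override:
        return Path(override)
    return Path.home() / ".ollama" / "models"

# Example:
# print(ollama_model_dir())
```

Deleting a model is better done with `ollama rm <name>` than by removing files from this directory by hand, since the blobs are shared between tags.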

Hey Krish, it would be more helpful if you could do a session on how to use an offline LLM on a custom dataset for Q&A.

r