Meta's LLAMA 3 with Hugging Face - Hands-on Guide | Generative AI | LLAMA 3 | LLM

Dive into the future of generative AI with our detailed guide on how to access Meta's LLAMA 3 using Hugging Face. This video provides a step-by-step walkthrough to help you harness the power of one of the most advanced language models available today.
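
As a taste of what the walkthrough covers, here is a minimal sketch of loading the gated instruct model through the transformers pipeline. It assumes you already have a Hugging Face account, an approved access request for the meta-llama repo, an access token (the "hf_..." value below is a placeholder), and a recent transformers version that accepts chat-style message lists directly.

```python
import torch
from huggingface_hub import login
from transformers import pipeline

login(token="hf_...")  # placeholder: your Hugging Face access token

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,  # halves memory compared to float32
    device_map="auto",           # needs `accelerate`; places layers on GPU/CPU
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain what a language model is in one sentence."},
]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```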

Hello everyone! I am setting up a donation campaign for my YouTube Channel. If you like my videos and wish to support me financially, you can donate through the following means:

(No donation is small. Every penny counts)
Thanks in advance!

I am making a "Hands-on Machine Learning Course with Python" on YouTube. I'll be posting 3 videos per week: Monday, Wednesday, and Friday evenings.

Comments

My first hands-on programming tutorial that I've walked through in 20 years. Very well done (I followed from LinkedIn). I had trouble with the pipeline's max-token / tokenizer return argument; I'll troubleshoot more later. It ran after I removed that pipeline attribute. Well done, and thanks for your time making these videos. An excellent hands-on introduction to AI programming.

Patrickh

Bro, in machine learning using SVM or LR, whatever input I give, it shows the result as 1; it never shows 0. And it shows warning messages like "maximum iteration limit reached." What can I do about this, bro?

dnpmjfh

I have run into an issue: RuntimeError: No GPU found. A GPU is needed for quantization.

Also, how do I use few-shot examples with meta-llama/Meta-Llama-3-8B, with my dataset in a .csv file?
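
The "No GPU found" error typically means bitsandbytes 4-/8-bit quantization was requested on a CPU-only runtime; switch to a GPU runtime or drop the quantization config. For the few-shot question, a rough sketch (the CSV file name and the question/answer column names are assumptions) is to concatenate a few rows into a plain few-shot prompt, since the base Meta-Llama-3-8B model is not chat-tuned:

```python
import csv
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B",   # base model, so we use a plain text prompt
    device_map="auto",
)

# Concatenate a few demonstrations from the CSV into a single few-shot prompt.
prompt_parts = []
with open("examples.csv", newline="") as f:          # hypothetical CSV file
    for row in list(csv.DictReader(f))[:5]:          # first 5 rows as demonstrations
        prompt_parts.append(f"Q: {row['question']}\nA: {row['answer']}")

prompt_parts.append("Q: The new question to answer goes here.\nA:")
prompt = "\n\n".join(prompt_parts)

out = pipe(prompt, max_new_tokens=100, return_full_text=False)
print(out[0]["generated_text"])   # the model's continuation after the final "A:"
```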

MScProject-un

Hi, thank you very much; I really appreciate the knowledge you are sharing through this video. Would it be possible to show how to fine-tune an LLM on a Q&A-format dataset, please?
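
One common recipe for this (a sketch, not the code from this video) is to render each question/answer pair with the model's chat template and hand the resulting text column to a supervised fine-tuning trainer such as TRL's SFTTrainer, usually with a LoRA/PEFT config so it fits on a single GPU. The file qa_pairs.csv and its question/answer columns are assumptions, and exact trainer arguments vary between TRL versions, so only the data-preparation step is shown:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)   # gated repo: needs your HF token

raw = load_dataset("csv", data_files="qa_pairs.csv")["train"]   # hypothetical file

def to_chat_text(example):
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    # apply_chat_template renders the Llama 3 prompt format as one training string
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

train_ds = raw.map(to_chat_text)
print(train_ds[0]["text"][:300])   # this "text" column is what an SFT trainer consumes
```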

adeeshadissanayaka

What if I want to add function calling and retrieval?
Will the pipeline stay a text generator?
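
Yes, the pipeline can stay task="text-generation"; tool use and retrieval are wired around it in plain Python. A hedged illustration (the tool name, the JSON convention, and the retrieved_context string are made up for the example, not a built-in transformers feature):

```python
import json
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",
)

retrieved_context = "Paris forecast: sunny, 24 C."   # stand-in for your retriever output
system = (
    "You can call tools. To call one, reply ONLY with JSON like "
    '{"tool": "get_weather", "args": {"city": "Paris"}}.\n'
    "Context:\n" + retrieved_context
)
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": "What's the weather in Paris right now?"},
]
reply = pipe(messages, max_new_tokens=128)[0]["generated_text"][-1]["content"]

try:
    call = json.loads(reply)                      # the model chose to call a tool
    print("tool call:", call["tool"], call["args"])
except json.JSONDecodeError:
    print("plain answer:", reply)                 # the model answered directly
```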

amventures

Does Llama 3 have filters too? Like for NSFW conversations or anything criminal?

bargajehaha

Is it possible to do this without having Colab Pro?

vaibhavdhanuka

Hello @Siddhardhan

It was a very informative video and helped me a lot to understand the entire code. Can you please make a video on how to fine-tune Llama 3 using RAG, with the code structure kept the same as in this video? (It will help to understand the entire process in one go.)

ruhultusar

Can you make a video on fine-tuning Hugging Face models on a custom or Kaggle dataset? Either image or NLP.

HK-Sepuri

Sir, can you upload any GenAI project other than RAG, or share an idea for any project other than RAG?

sam-uwgf

First off, great content. Please make a new tutorial with Gradio and explain how we can integrate it with an API!
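
In the meantime, a minimal Gradio sketch, assuming the same pipeline as in the video and `pip install gradio`. Launching with share=True prints a temporary public URL, and Gradio also exposes a client API for each launched app, so this is one quick way to put the model behind an endpoint:

```python
import gradio as gr
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",
)

def chat(message, history):
    # ChatInterface passes the new message plus the history (ignored in this minimal version)
    messages = [{"role": "user", "content": message}]
    out = pipe(messages, max_new_tokens=256)
    return out[0]["generated_text"][-1]["content"]

gr.ChatInterface(chat).launch(share=True)   # share=True gives a temporary public URL
```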

eswargowtham

"Your request to access this repo has been successfully submitted, and is pending a review from the repo's authors."
How long will it take?

edit: 10 minutes :D

datacleaner

I'm sorry, could you explain further the role of tokenizer.pad_token = tokenizer.eos_token? The script only runs when this line is removed.
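
For context on that line: the Llama 3 tokenizer ships without a pad token, so anything that pads batches (batched pipeline calls, training) needs one, and reusing the end-of-sequence token is the usual workaround. If setting it breaks the script, one thing to check is that the tokenizer being modified is the same object the pipeline actually uses, as in this sketch:

```python
from transformers import AutoTokenizer, pipeline

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)   # gated repo: needs your HF token

print(tokenizer.pad_token)                   # None by default for Llama 3
tokenizer.pad_token = tokenizer.eos_token    # reuse EOS as the padding token

# Pass the modified tokenizer into the pipeline so the setting actually takes effect.
pipe = pipeline("text-generation", model=model_id, tokenizer=tokenizer, device_map="auto")
```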

jeremycollins