Step By Step Tutorial To Fine-Tune LLaMA 2 With A Custom Dataset Using LoRA And QLoRA Techniques

In this video we will discuss how to fine-tune the LLaMA 2 model on a custom dataset using parameter-efficient transfer learning with LoRA: Low-Rank Adaptation of Large Language Models.
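For readers who want a feel for the workflow before watching, here is a minimal QLoRA sketch using the `transformers`, `peft`, `bitsandbytes`, `datasets`, and `trl` libraries. The model name, dataset, and hyperparameters are illustrative assumptions, not necessarily the exact values used in the video, and the `SFTTrainer` arguments follow the older trl API from around the time the video was published.

```python
# Minimal QLoRA fine-tuning sketch. Model, dataset, and hyperparameters are
# illustrative assumptions, not necessarily the values used in the video.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

model_name = "NousResearch/Llama-2-7b-chat-hf"  # ungated mirror of LLaMA 2 7B chat

# 4-bit quantization config: the "Q" in QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# LoRA config: train small low-rank adapter matrices instead of the full weights
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

# A small public dataset already formatted with the LLaMA 2 prompt template
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",  # older trl API; newer versions move this to SFTConfig
    max_seq_length=512,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
)
trainer.train()
trainer.model.save_pretrained("llama2-qlora-adapter")  # saves only the adapter weights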
-------------------------------------------------------------------------------------------------
Support me by joining the channel membership so that I can keep uploading these kinds of videos
-----------------------------------------------------------------------------------

►Data Science Projects:

►Learn In One Tutorials

End To End RAG LLM APP Using LlamaIndex And OpenAI- Indexing And Querying Multiple Pdf's

►Learn In a Week Playlist

---------------------------------------------------------------------------------------------------
My Recording Gear
Comments

Amazing!!! I read the book "Generative AI on AWS" today and learnt all the concepts of quantization, PEFT, LoRA, and QLoRA, and you have uploaded a video on the same!! Thanks a lot!!

kalyandey

Please make videos on theoretical concepts such as LLM internals, Mixture of Experts, RLHF, and so on.

bluelightning

I started my fine-tuning journey;
hope it will be something interesting.

shakilkhan

Thank you for the video. The main issue I face with these tutorials is the custom dataset preparation part; here too the dataset is loaded from HF.
I have a tabular NLP classification dataset on my local machine, say a sentiment analysis dataset.
How should I prepare the dataset and run the LLM fine-tuning locally?
Thank you again for this tutorial. I hope you'll show us the implementation of fine-tuning on an actual local dataset of one's own.
Also, there's a paper called TabLLM, which uses LLMs on numeric tabular datasets. A video on that one would be very helpful for implementing it on a custom private dataset. Thank you again, and keep bringing good content as always <3

nasiksami
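
On the local-dataset question above, a rough sketch of one way to do it, assuming a hypothetical `sentiment.csv` with `text` and `label` columns (the file and column names are made up for illustration):

```python
# Sketch: turn a local sentiment-analysis CSV into LLaMA 2 instruction-formatted
# text for fine-tuning. "sentiment.csv" and its columns are hypothetical.
from datasets import load_dataset

dataset = load_dataset("csv", data_files="sentiment.csv", split="train")
# load_dataset("json", data_files="sentiment.json") works the same way for JSON

def to_prompt(row):
    # Wrap each row in the LLaMA 2 chat format so the model sees
    # an instruction followed by the expected answer
    return {
        "text": f"<s>[INST] Classify the sentiment of this review: "
                f"{row['text']} [/INST] {row['label']} </s>"
    }

dataset = dataset.map(to_prompt)
# The resulting "text" column can then be fed to SFTTrainer exactly as with
# the HF-hosted dataset in the video.
```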

Amazing video, Krish. Can you also make a video on how to build a RAG-based LLM for Q&A over multiple documents, where we can actually compare two or more documents?

DataDorz

Can you please upload in-depth videos on how different prompting techniques, like chain-of-thought, self-consistency, and knowledge generation, are used in practice to improve model outputs for different use cases?

avanthikar

This guy's excitement for NLP is adorable but man needs to get out more, the real world is calling!

pedroluisbroca

Actually, this is the video I wanted to ask you for, but you read my mind before I asked. That is why I am now saying Krish sir is a mind reader.

sanadasaradha

Make theoretical videos on PEFT, LoRA, QLoRA, how quantization works, how to quantize a model, and how Mixture of Experts works.

Abdullah_kwl

For tuning this model, must the dataset be in the same format, or may I use other formats too, such as rows with plain text only, without <s> and [INST]? And if labelled data is required, should I use a CSV with two columns for prompt and answer?

bikramsubedi
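
On the format question above: the `<s>`/`[INST]` markers matter if you are fine-tuning the chat variant with its native template, and a two-column prompt/answer CSV can be converted into that format (see the mapping sketch earlier). One way to avoid hand-writing the markers, assuming a recent `transformers` version whose tokenizer ships a chat template (the prompt/answer pair below is hypothetical):

```python
# Sketch: let the tokenizer's built-in chat template emit the <s>[INST] ... [/INST]
# format instead of writing the special tokens by hand. Example pair is hypothetical.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")

messages = [
    {"role": "user", "content": "What is LoRA?"},
    {"role": "assistant", "content": "LoRA is a parameter-efficient fine-tuning technique."},
]
formatted = tokenizer.apply_chat_template(messages, tokenize=False)
print(formatted)
# Prints roughly: <s>[INST] What is LoRA? [/INST] LoRA is a parameter-efficient
# fine-tuning technique. </s>
```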

Please also make a video on the mathematical concepts and intuition behind LLMs.
Already subscribed and liked the video; you are doing an amazing job.

saqibmumtaz

May I ask a question? I used your code to fine-tune Llama 2 7B-chat on my data and the code works perfectly, but for some reason my new LLM can't predict the EOS token. So every time I ask the model to generate text, it generates tokens until it reaches max_length. I think there is something wrong with the way LoRA handles the EOS token. Do you have any idea how to fix this?

By the way, amazing video. Thanks.

ruiteixeira
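
A commonly reported cause of the runaway-generation issue described above is that the training examples never end with an EOS token, so the fine-tuned model never learns to emit one. A sketch of the usual fix, reusing the same illustrative model and dataset names as earlier:

```python
# Sketch: make sure every training example ends with the EOS token so the
# fine-tuned model learns when to stop generating.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-chat-hf")
# LLaMA tokenizers expose a flag that appends </s> automatically at encode time:
tokenizer.add_eos_token = True

# Alternatively, append it explicitly at the text level:
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")
dataset = dataset.map(lambda row: {"text": row["text"] + tokenizer.eos_token})
```

Another frequently cited culprit is setting the pad token equal to the EOS token: some data collators mask pad positions out of the loss, so the EOS token then never contributes a gradient. Using a distinct pad token is a common workaround.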

Very amazing video. Please make videos using a JSON or CSV file as the dataset.

paul-andrejacques

Yes, please also make a theoretical video on all the open-source LLMs.

sajidchoudhary

Thanks for the video. It would be better if you could show the documentation side by side with your testing, please.

ramankhanna

Amazing, how do you know all this, sir? 😢

akandesoji

Mistral's Medium posts helped me a ton; then I found enterprise for hands-on work.

AngelBautistaMartinez

Can we do fine-tuning on unsupervised data?

flyingsnow
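
On the question above: yes, in the sense of plain causal-language-model fine-tuning on raw text, where the objective is next-token prediction and no labels are needed. A minimal sketch, with a hypothetical local text file:

```python
# Sketch: unsupervised (causal LM) fine-tuning starts from raw text; "corpus.txt"
# is hypothetical. Each line becomes one example with a "text" column.
from datasets import load_dataset

dataset = load_dataset("text", data_files="corpus.txt", split="train")
# This "text" column can be passed to SFTTrainer just like the instruction-
# formatted dataset above; the model simply learns to continue your corpus.
```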

Wow, thanks for breaking it down step by step.

zulaysolis

Krish sir, can you tell me why, for fine-tuning Llama 3, most people use the Alpaca format? Is this a strict rule, or is it just that the Alpaca format happens to work well for fine-tuning?

AbhishekChaudhary-yf
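
On the Alpaca question: it is not a hard rule, just a widely adopted instruction template that many public datasets and fine-tuning notebooks standardized on, so models and tooling tend to work smoothly with it. For reference, the template as published in the Stanford Alpaca repo:

```python
# The Alpaca instruction template: a convention, not a requirement. Any prompt
# format works as long as training and inference use it consistently.
ALPACA_TEMPLATE = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

example = ALPACA_TEMPLATE.format(
    instruction="Classify the sentiment of the review.",
    input="The movie was fantastic!",
    output="Positive",
)
print(example)
```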