Generative AI Fine Tuning LLM Models Crash Course

This video is a crash course on how fine-tuning of LLM models can be performed using QLoRA, LoRA, and quantization, with LLama2, Gradient, and the Google Gemma model. The crash course includes both theoretical and practical intuition to help you understand how fine-tuning can be performed.

Timestamps:

00:00:00 Introduction
00:01:20 Quantization Intuition
00:33:44 Lora And QLORA Indepth Intuition
00:56:07 Finetuning With LLama2
01:20:16 1 bit LLM Indepth Intuition
01:37:14 Finetuning with Google Gemma Models
01:59:26 Building LLM Pipelines With No Code
02:20:14 Fine-Tuning With Your Own Custom Data

-------------------------------------------------------------------------------------------------
Support me by joining membership so that I can upload more videos like these
-----------------------------------------------------------------------------------

►Data Science Projects:

►Learn In One Tutorials

End To End RAG LLM APP Using LlamaIndex And OpenAI- Indexing And Querying Multiple Pdf's

►Learn In a Week Playlist

---------------------------------------------------------------------------------------------------
My Recording Gear
Comments

Thank you very much Krish for uploading this.

BabaAndBaby

Amazing as always! Such great tutorials and clear explanations! Thank you!

svitlanatuchyna

Full Respect to you Krish, Great video !!

dvbsagar

Awesome presentation Krish !!!! You are a superstar!!!

anuradhabalasubramanian

Amazing content, big fan of you :) Much love from Hawaii

lalaniwerake

Just getting your video at the right time!! Kudos brother

senthilkumarradhakrishnan

Krish... yet again!! I was just looking for your fine-tuning video here and you uploaded this. I can't thank you enough... really 👍😀

prekshamishra

Thank you so much for such a comprehensive tutorial. I really love your teaching style. Could you also recommend some books on LLM fine-tuning?

souvikchandra

Thank you for an amazing course as always. Can we please get these notes as well? They are really good for quick revision.

sadiazaman

Hi @krishnaik06,
Thank you again for another crash course.
May I know which tools/software you are using for the presentation?

foysalmamun

Krish, most of the fine-tuning is done with existing datasets from HF. However, converting a dataset into the required format is challenging for any domain-specific dataset. How can we fine-tune the model on our own data so that accuracy will be even better? Any thoughts?

Jeganbaskaran

Hi Krish, the video is really good and easy to understand, but I have one question: how do you choose the right dataset, and why? And why do you use that format_func function to format the dataset into that particular format? If you have any tutorial or blog on this, please share the link.

AntonyPraveenkumar

Can you make a good video on how to decide hyperparameters when training GPT-3.5?

yashshukla

Summary of the course.
Course Overview: This crash course by Krish Naik covers theoretical concepts and practical implementation of fine-tuning large language models (LLMs), including techniques such as quantization, LoRA, and QLoRA (PEFT).

Fine-Tuning Techniques: The course discusses different fine-tuning methods like quantization-aware training, matrix decomposition, and domain-specific fine-tuning for various applications like chatbots.

Technical Concepts: Explains floating-point precision (FP32, FP16), tensor data types in TensorFlow, and quantization methods (e.g., 4-bit normal float) used to optimize model performance and memory usage.
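The quantization idea summarized above can be illustrated with a simpler scheme than the 4-bit NormalFloat covered in the course: a minimal sketch of symmetric 8-bit quantization in plain Python. The function names and values here are illustrative, not taken from the video.

```python
# Illustrative symmetric 8-bit quantization: map floats to int8 and back.
# The core idea is scaling by the maximum absolute value; the 4-bit
# NormalFloat (NF4) scheme used in QLoRA is more sophisticated than this.

def quantize_int8(values):
    """Quantize a list of floats to int8 codes with a per-tensor scale."""
    scale = max(abs(v) for v in values) / 127.0
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate float values from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.2, 0.03, 2.4, -0.7]
codes, scale = quantize_int8(weights)
restored = dequantize_int8(codes, scale)
# The round trip loses a little precision but preserves values closely.
```

The same principle (store compact integer codes plus a scale, dequantize on the fly) underlies the lower-bit schemes discussed in the course.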

Implementation Steps: Demonstrates the process of preparing datasets, configuring training parameters (like optimizer, learning rate), and using the LoRA configuration for fine-tuning models such as LLaMA 2.
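The LoRA idea mentioned above, learning a low-rank update instead of retraining the full weight matrix, can be sketched with simple parameter arithmetic. The dimensions below are illustrative assumptions, not figures from the video.

```python
# LoRA replaces a full update of a d_out x d_in weight matrix W with two
# low-rank factors B (d_out x r) and A (r x d_in), so the update is B @ A.
# Only B and A are trained, which shrinks the trainable parameter count.

def full_params(d_out, d_in):
    """Trainable weights if the whole matrix is fine-tuned."""
    return d_out * d_in

def lora_params(d_out, d_in, r):
    """Trainable weights under LoRA with rank r."""
    return r * (d_out + d_in)

# Example: one 4096 x 4096 projection matrix with LoRA rank r = 8.
d = 4096
print(full_params(d, d))      # 16777216 weights in the full matrix
print(lora_params(d, d, 8))   # 65536 trainable LoRA weights
```

With rank 8 the trainable parameters for this layer drop by a factor of 256, which is why LoRA (and QLoRA on top of quantized weights) makes fine-tuning feasible on a single GPU.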

Practical Application: Provides a hands-on example of loading datasets, setting up the training environment, and fine-tuning a model using custom data, with plans to push the fine-tuned model to platforms like Hugging Face.
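As a rough illustration of why the quantization and LoRA techniques above matter in practice, here is a back-of-the-envelope estimate of weight memory for a 7-billion-parameter model at different precisions. This is an assumed calculation for illustration, not a figure quoted in the course.

```python
# Approximate GPU memory needed just to hold model weights at a given
# precision (activations, gradients, and optimizer state add more on top).

def weight_gb(n_params, bits_per_weight):
    """Weight storage in GiB for n_params weights at the given bit width."""
    return n_params * bits_per_weight / 8 / 1024**3

n = 7_000_000_000
print(round(weight_gb(n, 32), 1))  # FP32: ~26 GiB
print(round(weight_gb(n, 16), 1))  # FP16: ~13 GiB
print(round(weight_gb(n, 4), 1))   # 4-bit: ~3.3 GiB
```

The 4-bit figure explains why QLoRA-style fine-tuning of a 7B model can fit on a single consumer GPU while full-precision training cannot.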

muhammadhassan

Can anyone suggest how to analyze audio for soft skills in speech using Python and LLM models?

nitinjain

Please make a complete playlist on how to secure a job in the field of AI

tejasahirrao

Hi sir, I have tried running your LLaMA fine-tuning notebook on Colab with the free T4 GPU, but it is throwing an OOM error. Could you please guide me?

rakeshpanigrahy

We want more videos on fine-tuning projects

EkNidhi

Hi Krish, I have seen the entire video. I am confused between two things. Sometimes you said it is possible to train with my own data (own data meaning a URL, PDFs, simple text, etc.), but when you actually train the LLM model you give inputs in a certain format like ### question : ans.

Now, if I want to train my LLM in a real-life scenario, I don't have my data in this instruction format; what should I do in that case? It's not easy to transform my raw text into that format, right? How do I handle that situation? Is fine-tuning in a specific format the only way, or can I train on raw text? I know a process where I convert my text to chunks and then pass them to the LLM. These points are really confusing; can you clear them up?

rebhuroy

Hey, could you tell me what the prerequisites are to follow this crash course? It would be greatly beneficial!!

maximusrayvego