Fine-Tune Llama 3.1 (8B) - 2X Faster | With Google Colab and $0

In this video, we're going to fine-tune Llama 3.1 8B on Google Colab at zero cost, using a free dataset and libraries from Hugging Face.

We'll also test the model, compare it to GPT-3.5 Turbo, and look at when to fine-tune a model versus when to build a RAG system.

We'll keep the fine-tuning process simple, explaining what LoRA is and how fine-tuning works.
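
For context, the setup in the video follows the usual Unsloth pattern: load the model in 4-bit so it fits on a free Colab GPU, then attach LoRA adapters so only a small set of extra weights is trained. A minimal sketch (the model id and hyperparameters here are illustrative, not necessarily the exact values from the video):

```python
from unsloth import FastLanguageModel

# Load Llama 3.1 8B quantized to 4-bit so it fits in free-tier Colab VRAM
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",  # assumed model id; check Unsloth's hub page
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters: the base weights stay frozen, and only these
# small low-rank matrices are updated during training
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # LoRA rank: size of the low-rank update matrices
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)
```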

00:00 - Llama 3.1 8B Details
00:54 - Benchmark
01:20 - Llama 3.1 8B vs. GPT-3.5 Turbo
04:05 - Llama 3.1 8B Summarization Abilities
04:38 - Fine-Tuning vs. RAG
04:59 - When to Fine-Tune a Model
06:02 - When to Use a RAG System
07:39 - Unsloth
07:50 - Fine-Tuning Llama 3.1 8B on Free Google Colab
10:22 - Run and Save the Model We Created
12:06 - Code and Resources

_______________________________________________________________

💷 50% Discount Code: A2LH6LZ

_______________________________________________________________

#llama3.1 #finetune #gpt #falcon #ai #llms #huggingface #autogpt
Comments

seriousbusiness:
Why is the audio so weird? It sounds like many single words spliced together.

ss:
I only need help merging the Llama 8B base model with the newly trained adapter; the rest of the code is done. When I merge the files at the end, I get this error: AttributeError: 'LlamaForCausalLM' object has no attribute 'save_pretrained_merged'
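
A likely cause (a guess based on the error, not confirmed): `save_pretrained_merged` is a method Unsloth attaches when the model is loaded through `FastLanguageModel`, so a plain transformers `LlamaForCausalLM` won't have it. If reloading through Unsloth isn't an option, PEFT's standard merge does the same job. A sketch with placeholder paths:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then layer the trained LoRA adapter on top
base = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = PeftModel.from_pretrained(base, "path/to/lora_adapter")  # placeholder adapter dir

# Fold the LoRA weights into the base weights and save a standalone model
merged = model.merge_and_unload()
merged.save_pretrained("merged_model")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
tokenizer.save_pretrained("merged_model")
```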

ss:
My dataset is a CSV with Instructions and Outputs columns. How can I use it with this code?
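
One way to plug a CSV in (a sketch; the file name and prompt template are placeholders, and the column names are taken from the question above): load it with the `datasets` library and map each row into the single text field the trainer consumes:

```python
from datasets import load_dataset

# "my_data.csv" is a placeholder file name
dataset = load_dataset("csv", data_files="my_data.csv", split="train")

# Alpaca-style template; adjust to match the prompt format used in training
prompt = """### Instruction:
{}

### Response:
{}"""

def to_text(row):
    return {"text": prompt.format(row["Instructions"], row["Outputs"])}

dataset = dataset.map(to_text)
print(dataset[0]["text"])
```

In the notebook you would typically also append `tokenizer.eos_token` to each example so the model learns where to stop generating.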

alangabilan:
Can I run Flux on SageMaker with a Jupyter notebook? I copied the Google Colab notebook but got a code error. Any help?

ss:
When I use this code, model.push_to_hub_merged("My_Modal_Path", tokenizer, save_method="merged_16bit"), it shows this error: TypeError: argument of type 'NoneType' is not iterable. All files are saved successfully, but the error appears when Unsloth tries to upload.
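
A common cause of that TypeError (an assumption, not verified against this exact notebook) is a missing Hugging Face token at upload time, so the hub client ends up iterating over None. Saving works because it is local; only the upload needs credentials. Try logging in first, or pass a write token explicitly, and use the full username/repo id:

```python
from huggingface_hub import login

login(token="hf_...")  # a write-access token from huggingface.co/settings/tokens

model.push_to_hub_merged(
    "your-username/My_Modal_Path",  # full repo id; "your-username" is a placeholder
    tokenizer,
    save_method="merged_16bit",
    token="hf_...",  # the Unsloth notebooks also pass the token here
)
```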