Train Llama-3 8B on Any Dataset on Free Google Colab

This video shares a Colab notebook to train the Llama 3 8B model. It fine-tunes unsloth/llama-3-8b-Instruct on the Replete-AI/code-test-dataset using the code below with Unsloth and Google Colab, in under 15 GB of VRAM. The training completed in about 40 minutes total.
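Before a dataset like this can be handed to an SFT trainer, each row is typically rendered into a single prompt string. Below is a minimal, hypothetical sketch of that formatting step in plain Python; the column names (`instruction`, `input`, `output`) follow the common Alpaca convention and the EOS token is a placeholder — the actual columns of Replete-AI/code-test-dataset and the tokenizer's real `eos_token` may differ.

```python
# Hypothetical sketch of the dataset-formatting step used before SFT training.
# Column names follow the Alpaca convention; adjust to your dataset's schema.

ALPACA_PROMPT = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{output}"""

EOS_TOKEN = "</s>"  # assumption: in practice, use the tokenizer's real eos_token


def format_examples(batch):
    """Map a batch of dataset rows to single training strings."""
    texts = []
    for instruction, inp, out in zip(
        batch["instruction"], batch["input"], batch["output"]
    ):
        # Append EOS so the model learns to stop after the response.
        text = ALPACA_PROMPT.format(instruction=instruction, input=inp, output=out)
        texts.append(text + EOS_TOKEN)
    return {"text": texts}


# With Hugging Face datasets, this would typically be applied as:
#   dataset = dataset.map(format_examples, batched=True)
# and the resulting "text" column passed to the SFT trainer.
```

This mirrors the usual Unsloth/TRL workflow, where a batched formatting function produces a `text` column that the trainer consumes directly.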

#llama3 #llama3local #rag #lora


All rights reserved © 2021 Fahd Mirza
Comments

Thanks, Fahd, for the video. I don't see DistributedTrainingJob defined in the notebook, and importing it from accelerate fails; possibly it was replaced with Accelerator(). Do you have an updated version of the code? Also, using the full dataset, the code block prior to training.train() seems to run for a long time (I stopped it after 25 minutes). Is that expected behavior?

NumericLee

What's your other video where you go into detail about SFTTrainer's parameters?

nicolo

I have an error on my T4 instance: there is no more space. It only has 79 GB, and they give me 30 GB initially.

camiloalvarez

Can you create a video that goes beyond Hugging Face and actually makes the model USABLE locally?

prestonmccauley

Sir, where is the code on Hugging Face? I am unable to find it.

ojaskulkarni

I created my training data on my local machine and was looking to upload it to Google Colab and use it, but it is not being picked up and I am getting errors. Any light on this would be of great help. I really appreciate your time in helping others.

spotnuru