Fine-Tune TinyLlama 1.1B Locally on Own Custom Dataset

This video is a simple, step-by-step tutorial on how to train or fine-tune the TinyLlama model locally with Unsloth on your own data.
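The workflow the video walks through can be sketched roughly as below. This is a minimal sketch assuming the Unsloth and TRL APIs; the model id `unsloth/tinyllama-bnb-4bit`, the data file `my_data.json`, and all hyperparameters are illustrative assumptions, not necessarily the video's exact values.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load a 4-bit quantized TinyLlama through Unsloth
# (model id is an assumption; check the Unsloth listings on HF).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Your own dataset, pre-formatted into a "text" column.
dataset = load_dataset("json", data_files="my_data.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```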

#tinyllama #unsloth #tinyllama1b


All rights reserved © 2021 Fahd Mirza
Comments

Masterful! :D Thanks, thanks, thanks! :D

SonGoku-pcjl

Don't we have to upload the tokenizer too to the HF Hub for future inferencing?
And don't we need to merge the PEFT model and the base model to get the actual model?
Please clear up these doubts 🙏.

MRARyA-liwt
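For reference, both steps the comment above asks about can be done along these lines. This is a sketch assuming the PEFT/transformers APIs; `adapter_dir` and the repo id are hypothetical placeholders.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the saved LoRA adapter together with its base model, then fold
# the adapter weights into the base weights to get a plain model.
model = AutoPeftModelForCausalLM.from_pretrained("adapter_dir")
model = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("adapter_dir")

# Push both the merged model and the tokenizer, so inference later
# only needs the single repo id.
model.push_to_hub("your-username/tinyllama-finetuned")
tokenizer.push_to_hub("your-username/tinyllama-finetuned")
```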

Hi, can you make a tutorial on how to fine-tune this model for inference in Ollama? I can't get it to work in Ollama, and I've tried just about every tutorial on the web for this.

glorified
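One possible route for the Ollama question above: Unsloth can export the fine-tuned model to GGUF, which Ollama loads via a Modelfile. A sketch assuming Unsloth's GGUF export method; the directory, quantization choice, and resulting filename are assumptions to verify against the Unsloth docs.

```python
# Export the fine-tuned model to GGUF for Ollama / llama.cpp
# ("gguf_model" and the quantization method are illustrative choices).
model.save_pretrained_gguf("gguf_model", tokenizer,
                           quantization_method="q4_k_m")

# Then point an Ollama Modelfile at the exported .gguf file, e.g.:
#   FROM ./gguf_model/<exported-file>.gguf
# and run:  ollama create my-tinyllama -f Modelfile
```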

How can I load the saved model and test it?

onesecondnanba
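A quick way to answer the question above, as a sketch using the standard transformers API; `outputs` is assumed to be the directory the trainer saved to, and the prompt format must match the one used for training.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "outputs" is the save directory from training (an assumption).
tokenizer = AutoTokenizer.from_pretrained("outputs")
model = AutoModelForCausalLM.from_pretrained("outputs")

# Prompt in the same instruction format used during fine-tuning.
prompt = "### Instruction:\nSummarise what Unsloth does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
generated = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```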

Why did you decide the training data for TinyLlama has to be in that format? Is it somehow defined by the TinyLlama developers, or did you just choose it on your own? The problem with this format is that TinyLlama does not understand that "instruction" and "response" are somehow special words, and it generates those words in its answers.

Iiochilios
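On the format question above: instruction/response markers like these are a convention (the Alpaca-style template, not something defined by the TinyLlama developers); the model only learns to treat them as special through fine-tuning, and appending the EOS token after each response helps it learn where to stop instead of echoing the markers. A minimal sketch of such a formatting step; the template wording is one common variant, not the video's exact one.

```python
# Alpaca-style prompt template (a common convention, not a TinyLlama
# requirement).
ALPACA_TEMPLATE = """Below is an instruction that describes a task. \
Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
{response}"""


def format_example(instruction: str, response: str,
                   eos_token: str = "</s>") -> str:
    """Render one training example and mark the end of the response.

    Appending the EOS token teaches the model where an answer ends,
    which reduces rambling and echoing of the "### ..." markers.
    """
    text = ALPACA_TEMPLATE.format(instruction=instruction,
                                  response=response)
    return text + eos_token
```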

Can you store the trained model in a directory?

ugayashan
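Yes, saving to a local directory is supported; a sketch using the standard `save_pretrained` API, where `my_tinyllama` is a hypothetical path.

```python
# Save the fine-tuned model (the LoRA adapter, when training with
# PEFT) and its tokenizer to a local directory.
model.save_pretrained("my_tinyllama")
tokenizer.save_pretrained("my_tinyllama")
```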

7:52 I get the same error and rerunning it a few times in a row does nothing.

shaigrustamov

Can you provide the Colab file, please?

hellosaloni

Can you provide the dataset-preparation code? Awesome video, btw.

legendchdou