Training a Model in Hugging Face (11.5)

This video shows how to use PyTorch to fine-tune an existing Hugging Face model.
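The code link for the video is missing above, so here is a minimal, hypothetical sketch of the fine-tuning pattern the video describes: freeze a pretrained backbone and train only a new task head with PyTorch. A tiny random MLP stands in for the pretrained model so the sketch runs anywhere without downloads; in practice you would load a Hugging Face model instead.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained backbone (in practice, e.g. a model loaded
# from Hugging Face); a tiny MLP keeps this sketch self-contained.
backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU())
head = nn.Linear(32, 2)  # new task-specific classification head

# Freeze the pretrained weights; only the head is trained.
for p in backbone.parameters():
    p.requires_grad = False

model = nn.Sequential(backbone, head)
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 16)          # toy batch of input features
y = torch.randint(0, 2, (8,))   # toy binary labels

for _ in range(5):              # short fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

With a real Hugging Face model, the same pattern applies: freeze `model.base_model.parameters()` and pass only the classifier head's parameters to the optimizer.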

Code for This Video:

~~~~~~~~~~~~~~~ COURSE MATERIAL ~~~~~~~~~~~~~~~
📖 Textbook - Coming soon

~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~ SUPPORT ME 🙏~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#PyTorch #finetune #huggingface
Comments

So the title is wrong... you're not training a model "in" Hugging Face; you're training with models and a dataset "coming from" Hugging Face. You actually train the model in Colab.

PatriceFERLET

Thank you so much for this quick demo!

knockonwall

What does Colab Pro+ offer over plain Pro?

keylanoslokj

Great video. Just out of curiosity, I know I could look through your channel, but do you have a video on quantizing an LLM? Let's say going from 32-bit FP to 8-bit or 6-bit. Pros and cons, besides the obvious (smaller and less accurate)?

Pure_Science_and_Technology
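The video doesn't cover quantization, but since the question above comes up: a minimal sketch of post-training dynamic quantization in PyTorch, which stores `nn.Linear` weights as int8 and quantizes activations on the fly at inference time (6-bit isn't supported out of the box; sub-8-bit schemes typically need external libraries). The toy model here is a placeholder, not from the video.

```python
import torch
import torch.nn as nn

# Toy float32 model standing in for a larger network.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

# Post-training dynamic quantization: Linear weights -> int8,
# activations quantized dynamically during inference.
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
out = qmodel(x)  # same interface as the original model, smaller weights
```

The trade-off is roughly what the question suggests: about a 4x reduction in weight storage for int8 versus float32, at the cost of some accuracy, with the loss usually small for weights and larger as the bit width drops.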

yea! huggy face! yup! they've got it all !!

Jibs-HappyDesigns-