LLAMA-2 🦙: EASIEST WAY To FINE-TUNE ON YOUR DATA 🙌

In this video, I will show you the easiest way to fine-tune the Llama-2 model on your own data using the autotrain-advanced package from HuggingFace.
Steps to follow:
--- Installation of packages:
!pip install autotrain-advanced
!pip install huggingface_hub
!autotrain setup --update-torch (optional; needed on Google Colab)
--- HuggingFace credentials:
from huggingface_hub import notebook_login
notebook_login()
--- Single-line training command:
!autotrain llm --train --project_name your_project_name --model TinyPixel/Llama-2-7B-bf16-sharded --data_path your_data_set --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048 --push_to_hub --repo_id your_repo_id
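The folder passed via --data_path typically contains a train.csv with a single text column, where each row is one complete training example for the sft trainer. A minimal sketch of preparing such a file (the example pairs and the "### Instruction / ### Response" template are illustrative conventions, not requirements of autotrain):

```python
import csv

# Hypothetical instruction/response pairs; replace with your own data.
examples = [
    ("What is Llama 2?",
     "Llama 2 is a family of open-weight language models released by Meta."),
    ("Name one way to fine-tune it cheaply.",
     "Use parameter-efficient fine-tuning (PEFT) such as LoRA with 4-bit quantization."),
]

# Write a CSV with a `text` column; each row holds one formatted example.
with open("train.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text"])
    for instruction, response in examples:
        writer.writerow([f"### Instruction:\n{instruction}\n### Response:\n{response}"])
```

Point --data_path at the folder holding this train.csv; the video covers the exact format expected at the Data Set Format timestamp below.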
⏱️ Timestamps
Intro: [00:00]
Auto-train & installation: [00:17]
Fine-tuning - One Liner: [02:00]
Data Set Format: [05:30]
Training settings: [08:26]
LINKS:
All Interesting Videos:
#llama #finetune #llama2 #artificialintelligence #tutorial #stepbystep #llm #largelanguagemodels #largelanguagemodel