Fine-tuning Llama 2 for Tone or Style
Fine-tune Llama 2 (or any Hugging Face model!) for tone or style using a custom dataset - here, Shakespeare!
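Below is a minimal sketch of the data-preparation idea, assuming a local shakespeare.txt file and a fixed character chunk size (both illustrative placeholders, not necessarily what the video uses):

```python
# Turn raw Shakespeare text into Hugging Face Datasets of fixed-size
# chunks for causal-LM fine-tuning. The file path and chunk size are
# assumptions for illustration.
from datasets import Dataset

with open("shakespeare.txt", encoding="utf-8") as f:  # hypothetical local file
    text = f.read()

chunk_size = 1024  # characters per training sample (assumption)
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

# Hold out a slice as a validation set (the video trains with one at 5:30).
split = int(0.9 * len(chunks))
train_ds = Dataset.from_dict({"text": chunks[:split]})
eval_ds = Dataset.from_dict({"text": chunks[split:]})
```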
*Free Resources*
1. Create Embeddings with OpenAI, marco, or Llama 2.
2. Run inference with injected embeddings
3. Run fine-tuning using a Q&A dataset.
*Fine-tuning Repository Access*
1. Supervised Fine-tuning Notebook
2. Q&A Dataset Preparation Scripts
3. Embedding Notebook (Scripts to create and use Embeddings)
4. Notebook to fine-tune for Tone or Style
5. Forum Support
Chapters:
0:00 How to fine-tune on a custom dataset
0:15 What dataset should I use for fine-tuning?
0:50 Fine-tuning in Google Colab
2:45 Loading Llama 2 with bitsandbytes
3:15 Fine-tuning with LoRA
3:50 Target modules for fine-tuning
4:15 Loading data for fine-tuning
5:30 Training Llama 2 with a validation set
6:30 Setting training parameters for fine-tuning
7:50 Choosing batch size for training
8:15 Setting gradient accumulation for training
9:25 Using an eval dataset for training
9:50 Setting warm-up parameters for training
10:50 Using AdamW for optimisation
13:20 Fix for when commands don't work in Colab
15:00 Evaluating training loss
16:20 Running inference after training
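*Code Sketches*
The sketches below illustrate several of the chapter topics above. First, loading Llama 2 in 4-bit with bitsandbytes (2:45) and attaching LoRA adapters (3:15-3:50). The model ID, LoRA rank, and target modules are common choices, not confirmed from the video:

```python
# Sketch of loading Llama 2 in 4-bit with bitsandbytes (2:45) and adding
# LoRA adapters with peft (3:15-3:50). Model ID, rank, and target modules
# are common choices, not confirmed from the video.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Llama-2-7b-hf"  # assumption: the 7B base model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # do the matmuls in bf16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # place layers across available GPUs/CPU
)

lora_config = LoraConfig(
    r=8,                                  # LoRA rank (assumption)
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections (3:50)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train
```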
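Next, a sketch of the training setup covered at 6:30-10:50: batch size, gradient accumulation, an eval dataset, warm-up, and an AdamW variant. It reuses train_ds, eval_ds, tokenizer, and model from the sketches above; all hyperparameter values are illustrative, so check the notebook for the ones actually used:

```python
# Training setup sketch covering 6:30-10:50. All hyperparameter values
# are illustrative assumptions.
from transformers import (
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_tok = train_ds.map(tokenize, batched=True, remove_columns=["text"])
eval_tok = eval_ds.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False makes the collator copy input_ids into labels (causal LM).
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="llama2-shakespeare",   # hypothetical output directory
    per_device_train_batch_size=4,     # batch size (7:50)
    gradient_accumulation_steps=4,     # effective batch of 16 (8:15)
    evaluation_strategy="steps",       # score the eval set during training (9:25)
    eval_steps=20,
    warmup_steps=10,                   # learning-rate warm-up (9:50)
    optim="paged_adamw_8bit",          # memory-friendly AdamW variant (10:50)
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = Trainer(
    model=model,              # the LoRA-wrapped model from the sketch above
    args=args,
    train_dataset=train_tok,
    eval_dataset=eval_tok,    # held-out split from the data sketch (5:30)
    data_collator=collator,
)
trainer.train()
```

(In newer transformers releases the evaluation_strategy argument is named eval_strategy.)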
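Finally, a sketch of running inference after training (16:20); the prompt and sampling settings are arbitrary examples:

```python
# Inference sketch (16:20): generate in the fine-tuned style from a short
# prompt. The prompt and sampling settings are arbitrary examples.
prompt = "Shall I compare thee"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,   # sample instead of greedy decoding
    temperature=0.8,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```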