Fine-Tuning Large Language Models (LLMs)

Join us for the first session on fine-tuning large language models (LLMs) tailored for Lightricks developers. We will cover:
1) Fine-Tuning Overview: Understanding what fine-tuning is and how it differs from prompt engineering and RAG.
2) Live Code Example: A practical demonstration of the fine-tuning process using the LLaMA-2-7b-chat model.

What will you learn?
Fine-tuning overview:
- What is Fine-tuning?
- When to use Prompt Engineering vs. RAG vs. Fine-tuning?
- How to Fine-tune an LLM?
Fine-tuning code example:
- How to implement Supervised Fine-tuning (SFT) in code?
- How to evaluate an LLM?
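Data preparation for SFT largely comes down to wrapping each instruction/response pair in the prompt template the base model expects. As a minimal sketch (the system prompt and the instruction/answer strings below are illustrative, not from the lecture), formatting one example for a Llama-2-chat-style model might look like:

```python
# Hypothetical sketch: formatting one training example into the
# Llama-2 chat prompt template before supervised fine-tuning (SFT).
def format_llama2_chat(system: str, user: str, answer: str) -> str:
    """Wrap an instruction/response pair in the Llama-2 chat template."""
    return (
        f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n"
        f"{user} [/INST] {answer} </s>"
    )

example = format_llama2_chat(
    system="You are a helpful assistant.",
    user="What is fine-tuning?",
    answer="Continuing to train a pre-trained model on task-specific data.",
)
print(example)
```

During SFT, strings like this are tokenized and the model is trained to predict the tokens after `[/INST]`; a mismatched template is a common source of poor fine-tuning results.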

Chapters:
00:00 - About myself
00:42 - Agenda
01:25 - What You'll learn today
02:18 - Part I -- Fine-tuning overview lecture
02:23 - Pre-training
03:44 - What is Fine-tuning?
05:43 - Fine-tuning Example 1
08:15 - Fine-tuning Example 2
09:49 - Why fine-tune?
10:41 - Prompt Engineering vs. Fine-tuning
14:05 - Benefits of Fine-tuning your own LLM
17:21 - RAG vs. Fine-tuning
20:13 - 3 Ways to Fine-tune
24:20 - RLHF
27:39 - 3 Ways to Train Parameters
28:58 - LLMs are based on the Transformers architecture
33:43 - LoRA
36:38 - Fine-tuning Iterations Process
38:00 - Part II -- Fine-tuning code example
39:34 - LLaMA-2
40:50 - LLaMA-2 vs. GPT-3
42:10 - Preliminaries
44:25 - Quantization
47:46 - Tokenization
49:40 - Inference
52:10 - Data Preparation
58:27 - Training - Supervised Fine-Tuning (SFT)
01:02:20 - Saving, Loading and Exporting the model
01:03:41 - Evaluation
01:14:18 - Questions?
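The LoRA chapter above can be illustrated numerically: instead of updating the full weight matrix W, LoRA freezes W and trains two small matrices A and B whose low-rank product is added on top. A minimal NumPy sketch (the dimensions, rank, and alpha below are illustrative choices, not values from the lecture):

```python
import numpy as np

# Sketch of LoRA (Low-Rank Adaptation): train A and B, keep W frozen.
d_out, d_in, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pre-trained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, low-rank
B = np.zeros((d_out, r))                    # trainable, zero-initialized

# Effective weight at inference: W + (alpha / r) * B @ A.
# Because B starts at zero, the adapted model initially equals the base model.
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"full: {full_params}, LoRA: {lora_params} "
      f"({100 * lora_params / full_params:.1f}% of full)")
```

With these shapes, the trainable LoRA parameters are a small fraction of the full matrix (8,192 vs. 262,144), which is why LoRA makes fine-tuning a 7B model feasible on a single GPU.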
Comments

Thanks, Oren! Really clear breakdown of fine-tuning vs RAG and Prompt Engineering. Loved the practical example with the medical model. Good job!

AmitShq

Is there a link to the code shown during the lecture?

RA-svbv