Fine-tuning LLM with QLoRA on Single GPU: Training Falcon-7b on ChatBot Support FAQ Dataset
In this video, you'll learn how to fine-tune the Falcon 7b LLM (the 40b version is #1 on the Open LLM Leaderboard) on a custom dataset using QLoRA. The Falcon model is free for research and commercial use. We'll use a dataset consisting of chatbot customer support FAQs from an e-commerce website.
Throughout the video, we'll cover loading the model, attaching a LoRA adapter, and running the fine-tuning process (a minimal loading sketch follows this paragraph, and a training sketch appears after the chapter list). We'll also monitor the training progress using TensorBoard. To conclude, we'll compare the performance of the untrained and trained models by evaluating their responses to various prompts.
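As a rough guide to what the notebook does, here is a minimal sketch (not the exact code from the video) of loading Falcon-7b in 4-bit and attaching a LoRA adapter with the Hugging Face `transformers`, `bitsandbytes`, and `peft` libraries; the checkpoint name and hyperparameters are assumptions you may need to adjust for your GPU.

```python
# Minimal QLoRA-style setup sketch: 4-bit base model + trainable LoRA adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "tiiuae/falcon-7b"  # base Falcon-7b checkpoint on the Hugging Face Hub

# 4-bit quantization config used by QLoRA (NF4 quantization + double quantization).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # float16 compute fits a single Colab T4
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Falcon's tokenizer has no pad token by default

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)  # enables gradient checkpointing etc.

# LoRA adapter: only these small low-rank matrices are trained; the 4-bit base stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

With the adapter attached, training can proceed with a standard `transformers.Trainer`, logging to TensorBoard via `report_to="tensorboard"`, as sketched after the chapter list.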
00:00 - Introduction
01:43 - Falcon LLM
04:18 - Google Colab Setup
05:32 - Dataset
08:15 - Load Falcon 7b and QLoRA Adapter
12:20 - Try the Model Before Training
14:40 - HuggingFace Dataset
15:58 - Training
20:38 - Save the Trained Model
21:34 - Load the Trained Model
23:19 - Evaluation
28:53 - Conclusion
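To match the Training, Save, and Load chapters above, here is a minimal training sketch. It assumes the `model` and `tokenizer` from the previous snippet and uses a tiny hypothetical FAQ example in place of the real e-commerce support dataset; the output directory and hyperparameters are illustrative assumptions.

```python
# Minimal training sketch with TensorBoard logging (toy data, not the video's dataset).
from datasets import Dataset
from transformers import TrainingArguments, Trainer, DataCollatorForLanguageModeling

# Hypothetical stand-in for the chatbot support FAQ data used in the video.
faq_examples = [
    {"text": "<human>: How do I track my order?\n"
             "<assistant>: You can track your order from the Orders page in your account."},
]
dataset = Dataset.from_list(faq_examples)
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512))

training_args = TrainingArguments(
    output_dir="falcon-7b-qlora-faq",   # hypothetical output directory
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    max_steps=80,
    logging_steps=10,
    report_to="tensorboard",            # loss curves viewable in TensorBoard
    fp16=True,
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False          # silence the cache warning with gradient checkpointing
trainer.train()
trainer.save_model("falcon-7b-qlora-faq")  # saves only the small LoRA adapter weights
```

The saved adapter can later be loaded on top of the 4-bit base model for the before/after comparison shown in the Evaluation chapter.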
#chatgpt #gpt4 #llms #artificialintelligence #promptengineering #chatbot #transformers #python #pytorch