How To Finetune Mixtral-8x7B On Consumer Hardware

In today's video, I discuss the new state-of-the-art model released by Mistral AI called Mixtral. This model is an 8x7B mixture of experts (MoE) model, which outperforms Llama 2 70B while being significantly faster. It only activates two of its eight expert networks for each token, so only a fraction of the model's total parameters (roughly 13 billion of about 47 billion) is used in each forward pass.
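To illustrate what that top-2 routing looks like, here is a minimal sketch of an MoE layer in PyTorch. The layer sizes and class names are placeholders for illustration only, not Mixtral's actual implementation.

# Minimal sketch of top-2 mixture-of-experts routing (illustrative placeholder sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoE(nn.Module):
    def __init__(self, hidden_size=512, ffn_size=2048, num_experts=8):
        super().__init__()
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, ffn_size),
                nn.SiLU(),
                nn.Linear(ffn_size, hidden_size),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x):                               # x: (tokens, hidden_size)
        logits = self.gate(x)                           # router score per expert
        weights, idx = torch.topk(logits, k=2, dim=-1)  # keep the best 2 experts per token
        weights = F.softmax(weights, dim=-1)            # renormalise over the chosen 2
        out = torch.zeros_like(x)
        for slot in range(2):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 512)                            # 4 tokens, hidden size 512
print(Top2MoE()(tokens).shape)                          # torch.Size([4, 512])

Only the two selected experts run for each token, which is why the compute per token stays far below what the full parameter count suggests.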

I go over the details of the model and how to fine-tune it on custom datasets to unleash its full power. I provide step-by-step instructions on using the Finetune_LLMs software with an instruct dataset to create an instruct model. I also discuss the hardware requirements, including roughly 48GB of VRAM in total (two RTX 3090s or RTX 4090s) and at least 32GB of system RAM.
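For context on the memory budget, here is a minimal sketch of loading Mixtral-8x7B in 4-bit across two 24GB GPUs with the Hugging Face transformers and bitsandbytes libraries. This is an assumption-laden illustration of the memory math; the exact loading path used by the Finetune_LLMs software in the video may differ.

# Sketch: load Mixtral-8x7B quantized to 4-bit, sharded across available GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "mistralai/Mixtral-8x7B-v0.1"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # ~47B params * 0.5 bytes ≈ 24GB of weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",                      # shard layers across both GPUs automatically
)
print(model.hf_device_map)                  # shows which layers landed on which GPU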

I explain the process of creating the dataset using the Dolly 15K dataset and the format of the instruct model. Additionally, I provide a walkthrough of the fine-tuning process using the Finetune_LLMs software, highlighting the important flags and options.
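As an example of what building that dataset can look like, here is a short sketch that converts the Dolly 15K dataset into instruction/response text records. The prompt template is a generic assumption, not necessarily the exact format used in the video.

# Sketch: turn databricks-dolly-15k into an instruct-style JSONL training file.
import json
from datasets import load_dataset

dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

def to_prompt(row):
    # Include the optional context field only when it is non-empty.
    context = f"\n### Context:\n{row['context']}" if row["context"] else ""
    return (
        f"### Instruction:\n{row['instruction']}{context}"
        f"\n### Response:\n{row['response']}"
    )

with open("dolly_instruct.jsonl", "w") as f:
    for row in dolly:
        f.write(json.dumps({"text": to_prompt(row)}) + "\n")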

I discuss the performance characteristics of the fine-tuned model and demonstrate how to use Text Generation Inference (TGI) to get results from it. I also give some thoughts on the future of mixture of experts models and the potential to enhance the model by selecting more experts at a time.
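For reference, here is a minimal sketch of querying a locally running Text Generation Inference server that serves the fine-tuned model. The endpoint URL, port, and prompt format are assumptions, not the exact commands from the video.

# Sketch: send a prompt to a local TGI server's /generate endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/generate",       # assumed local TGI address and port
    json={
        "inputs": "### Instruction:\nExplain mixture of experts in one sentence.\n### Response:\n",
        "parameters": {"max_new_tokens": 128, "temperature": 0.7},
    },
    timeout=60,
)
print(resp.json()["generated_text"])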

If you're interested in fine-tuning the Mixtral model and gaining insights from custom datasets, this video provides a comprehensive guide. Don't forget to like the video, subscribe to the channel, and join the Discord community for further discussions. Stay brilliant!

#MistralAI #MixtralModel #FineTuning #MOEModel #CustomDatasets
#GPT3 #GPT4 #GPT #Llama #ai

00:00 - Intro
00:32 - Model Overview
02:52 - Software And Hardware Requirements
07:29 - Creating Instruct Dataset
11:53 - Setting Up Finetuning Software
13:55 - Finetune Program And Flags
17:28 - Finetuning
19:49 - Testing Finished Model
21:10 - My Thoughts
22:13 - Outro
Comments

Hi Blake, thank you very much for the video. Could you please upload a tutorial on text-generation-inference? Also, in your previous LLM finetuning videos you were using DeepSpeed and finetuning the whole model; could you please advise if the same can be done on Mixtral 8x7B?

lyf

Can you also please add the commands in the description of your video so it's easier to copy and paste?

GaneshKrishnan

But the link isn't up in the corner right now :'(

lewing-alt

I can't get this to run with your exact commands and weird file formats. It keeps throwing an error, "response template not set", which is odd because there is no variable for a response template. I turned off completion_complete and it ran the fine-tune.

Edit: I was being an idiot about the saving checkpoints, but the completion_complete part still wasn't working for me.

inbox-AI