Local LLM Fine-tuning on Mac (M1 16GB)

Here, I show how to fine-tune an LLM locally using an M-series Mac. The example adapts Mistral-7B to respond to YouTube comments in my likeness.

More Resources:

--

Intro - 0:00
Motivation - 0:56
MLX - 1:57
GitHub Repo - 3:30
Setting up environment - 4:09
Example Code - 6:23
Inference with un-finetuned model - 8:57
Fine-tuning with QLoRA - 11:22
Aside: dataset formatting - 13:54
Running local training - 16:07
Inference with finetuned model - 18:20
Note on LoRA rank - 22:03
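The dataset-formatting step above (13:54) comes down to turning each comment–response pair into a single training string. A minimal sketch in plain Python, assuming the `{"text": ...}` JSONL schema used by MLX's LoRA example; the instruction wording here is illustrative, not the exact prompt from the video:

```python
import json

# Illustrative instruction; the exact prompt used in the video differs.
INSTRUCTION = (
    "ShawGPT, a YouTube comment assistant: reply to the comment below "
    "in the style of the channel author."
)

def format_example(comment: str, response: str) -> str:
    """Build one training string in a Mistral-style [INST] chat format."""
    return f"<s>[INST] {INSTRUCTION}\n{comment} [/INST] {response}</s>"

def to_jsonl_line(comment: str, response: str) -> str:
    """Wrap the formatted string as one JSON object per line, under a 'text' key."""
    return json.dumps({"text": format_example(comment, response)})

print(to_jsonl_line("Great video!", "Thanks, glad it helped! -ShawGPT"))
```

Each line like this goes into the train/valid/test JSONL files that the training script reads; double-check the template against the repo you are using, since different MLX example versions expect slightly different schemas.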
Comments

Really excited to finally get this working! I know many people had asked for it. What should I cover next?

ShawhinTalebi

Thanks! I have been using Unsloth remotely for fine-tuning. Once the cloud bills start coming in, I am hoping to convince my boss that a MacBook Pro can be an option. My MLX docs are still just open tabs, so glad to see someone actually doing it.

JunYamog

Wow, that was incredibly precise and helpful! Thank you, and keep up the fantastic work with your videos!

azadehbayani

Didn't know you could do this on Mac! Amazing, thank you!

ifycadeau

An easy video to watch, with a great explanation 👍🏽

kaldirYT

Great tutorial, thanks. One question: I didn't understand where the fine-tuned model is stored on my Mac, and is it possible to run the model in Ollama?

LucaZappa

Thanks, great content! I really like the calm way you explain it all 👌

pawelw

I binge-watched your videos - high-quality, great content. Thank you so much, please keep it up! <3

eda-unzr

Really cool and helpful. Thank you very much. Have you performed fine-tuning on Llama 3.1 models successfully with this method?

ISK_VAGR

I was waiting for this video. Thank you so much.

chetanpun

Amazing video. Thanks for sharing such valuable content.

AbidSaudagar

Love the video, thank you for these concise tutorials!
On the initial inference, before moving on to fine-tuning, I can't get the generation step to produce any tokens.

lorenzoplaatjies

There are some rumors going around that 16GB should now be the standard memory configuration offered on the new Mac Mini. Any chance that when the M4 Mac Mini launches you can do a video on that as well?

futurerealmstech

Yes YES YES, I was desperately looking for this, tysm tysm

inishkohli

Took me a moment to find this:

parser.add_argument(
    "--data",
    type=str,
    default="data/",
    help="Directory with {train, valid, test}.jsonl files",
)

Worth mentioning that the data files are picked up from data/ by default.
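Given that default, the data directory just needs three JSONL files. A minimal stdlib sketch of producing them, assuming the one-object-per-line `{"text": ...}` schema used by the MLX LoRA example (the split fractions here are arbitrary, not the ones from the video):

```python
import json
from pathlib import Path

def write_splits(examples, out_dir="data", valid_frac=0.1, test_frac=0.1):
    """Write train/valid/test.jsonl into out_dir, matching the --data default layout."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    n = len(examples)
    n_test = max(1, int(n * test_frac))
    n_valid = max(1, int(n * valid_frac))
    splits = {
        "test": examples[:n_test],
        "valid": examples[n_test:n_test + n_valid],
        "train": examples[n_test + n_valid:],
    }
    for name, rows in splits.items():
        with open(out / f"{name}.jsonl", "w") as f:
            for row in rows:
                f.write(json.dumps({"text": row}) + "\n")
    return {name: len(rows) for name, rows in splits.items()}

counts = write_splits([f"formatted example {i}" for i in range(50)])
print(counts)  # {'test': 5, 'valid': 5, 'train': 40}
```

Shuffle before splitting if your examples are ordered; the training script can then be pointed at the directory (or left at its `data/` default).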

ShekharSuman

Thanks for the great video. Based on your varied experience, can you make a separate video on data-preparation techniques/methods for fine-tuning tasks on open-source models? Hoping to get a response from Shaw-human rather than Shaw-GPT... (just kidding) 😅

absar

I've been playing around with this, trying to see how you'd respond if I made horrible comments about your content - managed to get one slightly angry response 😁. But on a serious note, I love the work and am a big fan of the channel now!

acaudio

Any advice or guidance on how I could deploy this model so that I can use it as a Telegram bot? I've been able to plug it into Telegram's API and I'm able to get the bot up and running (locally on my mac), and well, I don't wanna keep my Mac alive just to run the bot! Cheers, thanks for the video!

camperinCod

What can I expect to achieve on an M3 Pro 64GB?

AGI-Bingo

Can I capture video and audio all day, with a camera on my shoulder, and fine-tune a model with the data every night?

daan