EASIEST Way to Fine-Tune an LLM and Use It With Ollama

In this video, we go over how you can fine-tune Llama 3.1 and run it locally on your machine using Ollama! We use the open-source repository "Unsloth" to do all of the fine-tuning on an SQL dataset!

Throughout this video, we discuss the ins and outs of what fine-tuning an LLM is, how to format your data so that the LLM can process it, and how to import the result into Ollama so you can run it locally on your machine!
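The data-formatting step described above (3:02 in the video) boils down to rendering each dataset row into a single training string. Here's a minimal sketch: the prompt template and the column names (`context`, `question`, `answer`) are assumptions for illustration, not necessarily the exact dataset or template used in the video.

```python
# Sketch: turn rows of an SQL question/answer dataset into single
# training strings a fine-tuning trainer can consume.
# Column names and the template are assumptions, not the video's exact setup.

PROMPT_TEMPLATE = """Below is a question about a database, along with its schema.
Write the SQL query that answers the question.

### Schema:
{context}

### Question:
{question}

### Answer:
{answer}"""

def format_example(row: dict, eos_token: str = "</s>") -> str:
    """Render one dataset row into a prompt string, ending with the EOS
    token so the model learns where a completion stops."""
    return PROMPT_TEMPLATE.format(
        context=row["context"],
        question=row["question"],
        answer=row["answer"],
    ) + eos_token

row = {
    "context": "CREATE TABLE users (id INT, name TEXT)",
    "question": "How many users are there?",
    "answer": "SELECT COUNT(*) FROM users",
}
print(format_example(row))
```

In practice you'd map a function like this over the whole dataset before handing it to the trainer; appending the EOS token matters, since without it the model tends to ramble past the end of its answer.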

Ollama is available on all platforms!

Make sure you follow for more content!
___________________________

Twitter 🐦

TikTok 📱

TIMESTAMPS
0:00 Intro
0:10 Getting the dataset
0:45 The Tech Stack
1:18 Installing Dependencies
1:48 Fast Language Model Explained
2:35 LoRA Adapters Explained
3:02 Converting your data to fine-tune
3:36 Training the Model
4:01 Converting to Ollama compatibility
4:11 Creating a Modelfile for Ollama
4:50 Final Output!
5:01 Check out Ollama in 2 minutes!
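The Modelfile step (4:11) can look roughly like this, assuming the fine-tuned model was exported to GGUF as `model.gguf` (the filename, parameter values, and system prompt here are illustrative assumptions):

```
FROM ./model.gguf
PARAMETER temperature 0.2
PARAMETER stop "</s>"
SYSTEM """You are an assistant that answers questions by writing SQL queries."""
```

With that file saved as `Modelfile`, `ollama create sql-llama -f Modelfile` registers the model (the name `sql-llama` is just an example) and `ollama run sql-llama` starts a local chat session with it.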
Comments
Author

What data would you fine-tune your LLM on?

warpdotdev

No overrating, no overtalking, straightforward. Love it.

siddhubhai

Great video for people who know coding and local LLMs but haven't fine-tuned!

first-thoughtgiver-of-will

Excellent tutorial! Doesn’t lowering the bit depth of the model greatly reduce accuracy? What are the pros and cons of doing so? Thanks!

BrentLeVasseur

Hi there.

It's wonderful. Will you please share the notebook and the Google Colab notebook?

AghaKhan

Would've been nice if you had shared the full Colab code...

xngmi

Can you train a 12B model on 24GB, or is 12B too big?
Another question: if you have multi-turn data (conversations), can you fine-tune on that? The examples I see are for Q:A pairs.

..

Can we train TinyLlama to do something similar? I was trying to run AI on a Raspberry Pi 5 (with the Hailo AI accelerator).

Hey.MangoJango

Why didn't you use the Hugging Face trainer directly instead of Unsloth? I want to know what the benefit of Unsloth is over the Hugging Face trainer.

shashanksinghal

Congrats on finding such a smartie-cutie as a DevRel for Warp

bistronauta

She's pretty... what was this video about?

samukarbrj