EASILY Train Llama 3.1 and Upload to Ollama.com

Unlock the full potential of LLaMA 3.1 by learning how to fine-tune this powerful AI model on your own custom data! 🚀 In this video, we'll take you through a step-by-step guide to train LLaMA 3.1, save it to Hugging Face, and deploy it on Ollama. Perfect for businesses looking to leverage AI with their private data! 🌟

Coupon: MervinPraison (50% Discount)

🔍 What You’ll Learn:
• Why fine-tuning is essential for custom data 📊
• Training the 8 billion parameter LLaMA 3.1 model 🦙
• How to save and deploy your model on Hugging Face and Ollama 🌐

🔧 Steps Covered:
1. Configuration setup and data formatting ⚙️
2. Pre-training model evaluation 📉
3. Data loading and training with SFT Trainer 📥
4. Post-training model evaluation and saving 🚀
5. Uploading the model to Hugging Face & Ollama 🛠️
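The data-formatting step above (step 1) typically means mapping your dataset into alpaca-style prompt strings before handing them to the SFT Trainer. A minimal sketch in plain Python — the template and EOS handling follow the common Unsloth/alpaca convention, and `EOS_TOKEN` here is a placeholder for whatever `tokenizer.eos_token` your loaded model provides:

```python
# Alpaca-style prompt template commonly used when fine-tuning Llama models.
ALPACA_PROMPT = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Response:
{response}"""

EOS_TOKEN = "</s>"  # placeholder; use tokenizer.eos_token from your loaded model


def format_examples(batch):
    """Map a batch of {instruction, input, output} records to training texts.

    Appending EOS matters: without it the fine-tuned model may never
    learn to stop generating.
    """
    texts = []
    for instruction, inp, output in zip(
        batch["instruction"], batch["input"], batch["output"]
    ):
        texts.append(
            ALPACA_PROMPT.format(instruction=instruction, input=inp, response=output)
            + EOS_TOKEN
        )
    return {"text": texts}


# Example batch, shaped the way datasets' map(batched=True) would pass it:
batch = {
    "instruction": ["Translate to French"],
    "input": ["Hello"],
    "output": ["Bonjour"],
}
formatted = format_examples(batch)
```

In the actual training run you would point the SFT Trainer at the resulting `text` column (in TRL this is the `dataset_text_field` setting).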

💡 Benefits:
• Custom AI model tailored to your specific needs 🎯
• Easy deployment and accessibility on various platforms 🌍
• Enhanced performance with less memory usage 💾
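The "less memory usage" point usually comes from LoRA: instead of updating all weights, small rank-r adapter matrices are trained. A back-of-envelope calculation (pure Python, with illustrative numbers — rank 16 is a common default in Unsloth notebooks, and 4096×4096 is a typical attention-projection shape in an 8B-class model) shows why this saves so much memory:

```python
def lora_params(d_in, d_out, rank):
    """Trainable parameters for one LoRA adapter pair (A: d_in x r, B: r x d_out)."""
    return rank * (d_in + d_out)


# One 4096x4096 projection matrix:
d = 4096
full = d * d                  # full fine-tuning: every weight is trainable
lora = lora_params(d, d, 16)  # LoRA with rank 16
ratio = lora / full           # fraction of weights that are actually trained
```

At rank 16 the adapter trains well under 1% of the weights of that layer, which is why the optimizer state (the dominant memory cost in training) shrinks so dramatically.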

🔗 Links:

0:00 - Introduction to LLaMA 3.1 fine-tuning
1:07 - Overview of the video content
2:29 - Configuration
4:52 - Loading the dataset
6:40 - Training the model
8:12 - Saving the model
9:13 - Running the code and observing results
10:16 - Saving the model to Ollama
10:36 - Creating GGUF format
11:34 - Creating Ollama Modelfile
12:32 - Creating the model in Ollama
12:57 - Testing the model with Ollama
13:22 - Pushing the model to Ollama
14:17 - Final steps and conclusion
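The Ollama steps (11:34–13:22 above) revolve around a Modelfile that points Ollama at the exported GGUF file. A minimal sketch — the filename and parameter value are placeholders, and note that Unsloth's GGUF export can also generate a ready-made Modelfile with the correct Llama 3.1 chat template:

```
# Modelfile — points Ollama at the GGUF export of the fine-tuned model
FROM ./model.Q4_K_M.gguf

# The Llama 3.1 chat template and stop tokens would normally go here
# via TEMPLATE and PARAMETER stop directives.
PARAMETER temperature 0.7
```

You would then build the model with `ollama create mymodel -f Modelfile`, test it locally with `ollama run mymodel`, and publish it with `ollama push yourusername/mymodel` once your Ollama public key is registered with your ollama.com account.
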
Comments

Fantastic detailed tutorial Mervin! Absolutely love this!

danielhanchen

Man, you explained everything so so well!

francosbenitez

Super awesome tutorial! Many thanks, Mervin!

grtbigtreehugger

It is super clear to understand and apply to my use case. Thank you so much!!

fhcwugf

Thanks for this tutorial! I usually use Unsloth but their Ollama notebook was more advanced so having the video is very helpful.

lemonsqueeezey

Brother, you are becoming the guy with the coolest nickname among me and my friends, like, "Hey did you watch The Amazing Guy's new video?"

Dr.UldenWascht

SO CLOSE! Great video :) This ALMOST worked ... but failed with the error "xFormers wasn't built with CUDA support / your GPU has capability (7, 5) (too old)". I'm running this on an AWS EC2 g4dn.xlarge (16GB VRAM). Gonna try again with TorchTune instead. Wish me luck!

MrMroliversmith

It seems we've got different definitions of the word easy.

Hexdus

I watched one of your Florence-2 videos a couple weeks ago and was very impressed by your workflows. Now with Llama 3.1, you can get even better vision (at least for the 8B parameter model). The model I came across was Llama-3.1-Unhinged-Vision-8B by FiditeNemini. It pairs very nicely with mradermacher's Dark Idol 3.1 Instruct models, surely it would work with several other finetunes. Perhaps someone might have done or will do vision projector models for the Llama-3.1 70B and 405B models.

EM-yctv

Is fine-tuning the best way to give data to a model? If the information updates quickly, like documentation etc., I don't think fine-tuning is the best way — that would be RAG, now that long context is available for Llama 3.1.
I've always considered fine-tuning a way to change a model's "behaviour" or bake in static data, like teaching other languages or uncensoring, and RAG the way to give it my own data.

rodrimora

Is it possible to do unsupervised learning by first giving the model a large corpus of data from a specific domain to make it context-aware, and then use supervised fine-tuning?

free_thinker

How do I add Llama 3.1 to a Laravel PHP website?

Please create a video on this topic 🙏🙏🙏

zareefbeyg

Where the heck did you get those 4 A6000s? I only have 1 RTX 4090 😃 What I've heard is that 24GB VRAM isn't enough, right? How long did the training take, and what were the costs? Anyway, great video, thanks!

returncode

Are we able to fine-tune a model that's available in Ollama?

deepadharshinipalrajan

Do you have 4x A6000 on your local machine? I have an RTX 4090. I use it for fine-tuning computer vision models, and I've fine-tuned and run some smaller LLMs.

nikoG

Open Interpreter + Groq + Llama 3.1 + n8n + Gorilla AI = lightning-speed 100% autonomous agent that automates all workflows with a simple prompt, all open source and free, with access to over 1600 APIs.

timmcgirl

Hello Mervin, I find that Llama 3.1 8B is not great at calculation. Can I fine-tune it?

wohorexy

Maybe I'm off here, but is there a way to just use Llama 3.1 and upload your files to it somehow, or do you have to go through this whole process? Plus I don't want my private data on Hugging Face.

Derick

Hello sir, can you tell me how to fine-tune and deploy Llama 3 models on Amazon SageMaker using notebooks?

Ajith-it

Great video, Mervin.
I have one simple question: can I change the alpaca prompt language to something besides English — say, French — if I'm using a French dataset? Does it work like that?

KleiAliaj