LLAMA-3 🦙: EASIEST WAY TO FINE-TUNE ON YOUR DATA 🙌

Learn how to fine-tune the latest Llama 3 on your own data with Unsloth.
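
For readers who just want the shape of the workflow, here is a minimal sketch of fine-tuning with Unsloth. The model id, LoRA settings, dataset, and training parameters are illustrative assumptions, not the exact values used in the video.

```python
# Minimal Unsloth fine-tuning sketch. All names and numbers here are
# illustrative assumptions, not the exact settings from the video.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

max_seq_length = 2048

# Load a 4-bit quantized Llama 3 base model through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",  # assumed model id
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Format an Alpaca-style dataset into a single "text" column.
alpaca_prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{}\n\n### Input:\n{}\n\n### Response:\n{}"
)

def to_text(example):
    return {"text": alpaca_prompt.format(
        example["instruction"], example["input"], example["output"]
    ) + tokenizer.eos_token}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```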

Sign up for Advanced RAG:

LINKS:

TIMESTAMPS:
[00:00] Fine-tuning Llama3
[00:30] Deep Dive into Fine-Tuning with Unsloth
[01:28] Training Parameters and Data Preparation
[05:36] Setting training parameters with Unsloth
[11:03] Saving and Utilizing Your Fine-Tuned Model

All Interesting Videos:

Comments

Thank you!
More fine-tuning case studies on Llama 3, please!
Your presentation on this is much appreciated 🙏!

spicer

Thank you so much for sharing, this was wonderful. I have a question: I am a beginner in the LLM world, so which playlist on your channel should I start from?
Thank you

Joe-tkcx

Thank you very much for your great video. I ran the notebook but did not manage to find the GGUF files on Hugging Face. I put in my HF token, but that did not work. Do I have to change the code?

hadebeh
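
For context, the public Unsloth notebooks usually wrap the GGUF export cells in `if False:` guards, so nothing is written or uploaded until you switch them on. A hedged sketch of the export calls (the repo name, token, and quantization method are placeholders):

```python
# Hedged sketch of Unsloth's GGUF export. In the notebook these calls are
# typically guarded by `if False:` and must be enabled by hand.

# Save a quantized GGUF file locally (quantization method is just an example).
model.save_pretrained_gguf("model", tokenizer, quantization_method="q4_k_m")

# Or push the GGUF straight to the Hugging Face Hub.
model.push_to_hub_gguf(
    "your-username/llama3-finetune",  # placeholder repo id
    tokenizer,
    quantization_method="q4_k_m",
    token="hf_...",                   # your HF write token
)
```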

Master, I have a question: if my dataset has the same format as Alpaca, do I need to upload it to Hugging Face to train, or can I use my dataset locally, e.g. from my PC? Thanks 👍🏻

juanrozo
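
For what it's worth, the Hugging Face `datasets` library can read a local file directly, so uploading to the Hub is not required. A small sketch (the file path is a placeholder):

```python
from datasets import load_dataset

# Load an Alpaca-style dataset from a local JSON file on your own machine.
dataset = load_dataset("json", data_files="my_alpaca_data.json", split="train")

# CSV works the same way:
# dataset = load_dataset("csv", data_files="my_alpaca_data.csv", split="train")
```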

Hello,
it seems impossible to generate GGUF, compilation problem …
Did you try it?

loicbaconnier

Thank you so much for this useful video!

lemonsqueeezey

Great video mate. How can I add more than one dataset?

KleiAliaj
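
One common approach is to merge the datasets before training, assuming they share the same columns; a sketch (dataset names and paths are placeholders):

```python
from datasets import load_dataset, concatenate_datasets

# Load two Alpaca-style datasets and merge them into one training set.
ds_a = load_dataset("yahma/alpaca-cleaned", split="train")
ds_b = load_dataset("json", data_files="my_extra_data.json", split="train")

# Both datasets must have the same column names for this to work.
combined = concatenate_datasets([ds_a, ds_b]).shuffle(seed=42)
```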

Great video.
But how do I add more than one dataset?

KleiAliaj-usip

Thank you for the video. Just an observation: the video glosses over how to prep your data. For example, I want to train a model to write in my style. How would I prep my data for training?

goinsgroove
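
As a rough sketch of one way to prep style data (the records and field names are illustrative): write instruction/response pairs where the responses are text in your own voice, then format them with the same Alpaca-style template shown in the sketch near the top of the page.

```python
# Illustrative data prep: your own writing becomes the "output" field.
from datasets import Dataset

records = [
    {"instruction": "Write a short intro for a blog post about LoRA.",
     "input": "",
     "output": "LoRA is the duct tape of fine-tuning: cheap, quick, and it holds."},
    {"instruction": "Rewrite this sentence in my style.",
     "input": "The results were good.",
     "output": "Honestly? The results blew past anything I expected."},
]

dataset = Dataset.from_list(records)
# Then map these records through the Alpaca prompt template into a "text"
# column and fine-tune as usual.
```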

Is there a way to sort of "brand" Llama 3, so that the model gives a custom answer to "Who are you?"?
Thank you!

jannik
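
One way this is often done (an assumption, not something covered in the video) is to mix a handful of identity examples into the fine-tuning data so the model learns the custom answer:

```python
# Hypothetical identity examples for "branding" the model.
identity_examples = [
    {"instruction": "Who are you?",
     "input": "",
     "output": "I am LlamaBot, a fine-tuned Llama 3 assistant built by ACME."},
    {"instruction": "What is your name?",
     "input": "",
     "output": "My name is LlamaBot."},
]
# Add these to the training set (repeating them a few times usually helps the
# behavior stick), then fine-tune as usual.
```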

How do you actually train models? I mean unsupervised training, where I have a set of documents and want the model to learn from them and perhaps pick up the author's 'style' or tendencies?

VerdonTrigance
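
That would be continued pretraining on raw text rather than instruction tuning. A rough sketch of loading plain documents (the path is a placeholder); the same LoRA and trainer setup as in the sketch at the top still applies, and the model simply learns to continue the text, which is how it can pick up an author's style:

```python
from datasets import load_dataset

# Each line of the text files becomes one example in a "text" column.
raw_docs = load_dataset("text", data_files="my_documents/*.txt", split="train")

# Reuse the SFTTrainer setup from the fine-tuning sketch, pointing
# dataset_text_field at "text" and skipping the instruction template.
```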

How do you train a model to add the knowledge from a book, which will only have a single column of text?

RodCoelho

Regarding the save options: do I have to delete the parts that I don't want, or how does this work?

metanulski

Can you make a video on how to use a local Llama 3 to understand a large C++ or C# codebase?

shahzadiqbal

MediaTek's Dimensity chips + Meta's Llama 3 AI = the dream team for on-device intelligence.

scottlewis

Hi, what if we have already downloaded a GGUF file? How do we use it locally?

CharlesOkwuagwu
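
An already-downloaded GGUF file can be run locally without the training stack at all; one option is the `llama-cpp-python` package (the model path and prompt are placeholders). Another route is importing the file into Ollama via a Modelfile.

```python
# Run a local GGUF file with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(model_path="./llama3-finetune.q4_k_m.gguf", n_ctx=2048)
out = llm("### Instruction:\nWho are you?\n\n### Response:\n", max_tokens=128)
print(out["choices"][0]["text"])
```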

Is it possible to use a database directly as a dataset to fine-tune an LLM?

balb
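
Not directly, but it is straightforward to pull rows out of a database and wrap them as a `datasets` dataset first; a sketch (the database, table, and column names are assumptions):

```python
# Sketch: turn database rows into a Hugging Face dataset for fine-tuning.
import sqlite3
import pandas as pd
from datasets import Dataset

conn = sqlite3.connect("my_data.db")  # placeholder database
df = pd.read_sql("SELECT instruction, input, output FROM examples", conn)

dataset = Dataset.from_pandas(df)
# From here, format into the prompt template and train as usual.
```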

Thank you! But can a Mac M3 Max use MLX to fine-tune?

dogsmartsmart

Fantastic work and always love your videos! :)

danielhanchen

One more comment :-). This video is about fine-tuning a model, but there is no real explanation of why. We fine-tune with the standard Alpaca dataset, but again there is no explanation of why. It would be great if you could do a follow-up and show us how to create datasets.

metanulski