Fine Tuning ChatGPT is a Waste of Your Time

Fine-tuning isn't the right fit for many problems and teams. Today we discuss why fine-tuning has limitations and why alternative approaches might be better for you, despite how major companies are talking about AI. We also glimpse an exciting field of study that has yet to be fully explored!

Comments

Very good explanation and excellent thinking. However, the problem is that context windows are not normally big enough to take all the data. This is why fine-tuning is an important part of the mix. The correct usage is a balance between long-term data going into fine-tuning and short-term data going into RAG. There will soon be a type of job specifically around this sort of data architecture.

BradleyKieser

We had a well-crafted GPT-4 prompt with many tests covering our desired outputs. We took GPT-3.5, fine-tuned it, and now it performs the same. Worked well for our use case!

CitizenWarwick

Relaying what works and what doesn't is highly valuable. Too few people share their experience. Thank you!

Training/fine-tuning is a very delicate process; it has to be done really well to get really good results. Moreover, it's not a well-understood process - new discoveries are constantly being made, even at the highest levels of research.

tomski

Cool, but fine-tuning is a necessary tool if you want to lock domain-specific information that doesn't change frequently into the model while freeing up the context window for more dynamic content. An example: I want to make an AI model that generates quests in a game. For this I need to fine-tune the model on the basics of the game universe, freeing up the context window for the information coming from the game world, such as the population of each territory, which faction controls which places, the user's location and progress, etc.

techracoon
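
A minimal sketch of the split this comment describes, assuming the openai Python SDK: the static game lore is presumed baked into a fine-tuned model (the `ft:` model ID below is made up for illustration), so only the dynamic world state is placed in the prompt.

```python
from openai import OpenAI

client = OpenAI()

# Dynamic state pulled from the game world at request time (illustrative values).
world_state = {
    "player_location": "Emberfall Keep",
    "controlling_faction": "Iron Covenant",
    "territory_population": 1240,
    "player_progress": "Act 2, level 14",
}

state_lines = "\n".join(f"- {k}: {v}" for k, v in world_state.items())

response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-0125:your-org::quests",  # hypothetical fine-tuned model ID
    messages=[
        {"role": "system", "content": "Generate a quest consistent with the game universe."},
        {"role": "user", "content": f"Current world state:\n{state_lines}\n\nGenerate one quest."},
    ],
)
print(response.choices[0].message.content)
```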

I'm far from an expert, but I think at least part of the challenge is that people think fine-tuning is for giving the LLM more DATA, increasing its knowledge base. That's not what fine-tuning is for. It's for customizing the WAY it responds. It's more of a style guide than a knowledge store.

keithprice

Thank you for making this video. I remember talking to my friends about a similar concept a few months ago; now I finally know I was not alone! RAG seems like the thing most AI services should have by default.

injeolmi

Excellent description of the challenges in fine-tuning AI models! You got yourself a new subscriber 🎉

adrianmoisa

Good explanation. However, it looks like these two techniques are not mutually exclusive: e.g., it could still be valuable to fine-tune a model to improve its processing of RAG generations without any specific data, while the RAG mechanism supplies all the data for each specific generation.

YuraCCC

Can the content for training be collected from GPT-4? For example, after chatting with GPT-4, can the desired content be filtered and used to fine-tune GPT-3.5? Is this approach feasible and effective? Are there any considerations to keep in mind?

kingturtle
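
What this question describes is essentially distillation. A minimal sketch, assuming the openai Python SDK and OpenAI's chat fine-tuning JSONL format; the prompts and the `looks_good` quality filter are hypothetical placeholders you would replace with your own.

```python
import json
from openai import OpenAI

client = OpenAI()

prompts = ["Summarize clause 12 of this contract...", "..."]  # your own prompts

def looks_good(answer: str) -> bool:
    # Hypothetical quality filter; replace with real checks or human review.
    return len(answer) > 50

with open("train.jsonl", "w") as f:
    for prompt in prompts:
        # Collect a GPT-4 answer for each prompt.
        answer = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        # Keep only the answers that pass the filter, in fine-tuning format.
        if looks_good(answer):
            record = {"messages": [
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]}
            f.write(json.dumps(record) + "\n")
```

The resulting `train.jsonl` file can then be uploaded for a GPT-3.5 fine-tuning job.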

I'm not an expert in AI topics, but I really do think the only thing we need is an AI that can just understand, with RAG handling everything else.
Great and insightful video!

ominoussage

Currently I am planning and testing a project which will rely heavily on RAG, and I think I will also have to consider fine-tuning because of the way I need the model to format, reference, and present information from multiple documents. I'm still wrapping my head around how to produce the training data, but at the moment my impression is that, at least in my case study (a specialized and niche knowledge base about music and musical research), even RAG requires quite a bit of work to fragment the documents in ways that guarantee reliable retrieval.

aldotanca
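
A minimal sketch of the fragmentation step mentioned above: fixed-size character chunks with overlap, in plain Python. The sizes are arbitrary starting points, not recommendations; real pipelines often split on headings or paragraphs instead.

```python
def chunk_text(text: str, chunk_size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks that overlap, so each
    retrieval hit carries some surrounding context."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by less than the chunk size so adjacent chunks share text.
        start += chunk_size - overlap
    return chunks
```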

Wow, this is quality content! You won another sub :) Soon you'll be big, I can see that. Keep working hard!

arthurguiot

I would like to get your advice on creating a conversational chatbot. Would RAG or fine-tuning be suitable? We have a CourtLAW-based dataset that contains thousands of PDFs, an unstructured dataset of paragraphs.

gopinathl

Great video!
What is the whiteboard app that you are using?

JoshKaufmanstuff

I wonder how the performance of RAG will vary with integrating the generative and retrieval processes. Seems like it would be difficult to optimise, plus more expensive computationally. Definitely the way forward though.

CyrusNamaghi

Why, why such a tiny number of subscribers? A very much needed approach to problems: telling people "wait a minute... here are the stones on the road."

MaxA-wdqo

Nice video, dude. What is that app you are using to visualize your message?

joshmoracha

I am very new to this AI field. Thank you very much for explaining in simple terms!

Arashiii

I would say you are actually looking for embeddings. You can set up a database with embeddings based on your specific data, which will be checked for similarities. The matches would then be used to create the context for the completions API. Fine-tuning is more about modifying the way it answers. This was my understanding.

korbendallasmultipass
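
A minimal sketch of the embeddings approach described above, assuming the openai Python SDK and NumPy: cosine similarity over an in-memory list stands in for the database; a real vector store would replace it in practice.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    # Turn a piece of text into an embedding vector.
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

documents = ["Refund policy: ...", "Shipping times: ..."]  # your own data
doc_vectors = [embed(d) for d in documents]

def top_match(question: str) -> str:
    # Find the stored document most similar to the question (cosine similarity).
    q = embed(question)
    sims = [float(np.dot(q, v)) / (np.linalg.norm(q) * np.linalg.norm(v))
            for v in doc_vectors]
    return documents[int(np.argmax(sims))]

# The best match is then placed in the prompt as context for the completion.
context = top_match("How long does shipping take?")
```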

Would that be different with the recently introduced custom GPTs, which allow you to personalize your model based on your specific instructions and provide it with your own contextual documents for reference?

cyclejournal