Is Fine Tuning Models Still a Waste of Time?

📋 Summary
What are the limitations of fine-tuning AI models like OpenAI's GPT-4o? Today we explore new studies on fine-tuning's effect on model accuracy and hallucinations. We also preview innovative new RAG capabilities, including context caching, and their potential to transform AI development. Lastly, we touch on the influence of hype in machine learning and its various impacts on the field.

🔗 Show Links:

🙌 Support the Channel (affiliate links for things I use!)

#subscribe

Join our AI Discord Community

🚩 Chapters
00:00 Introduction to Fine Tuning
00:43 New Studies on Fine Tuning
04:46 Mapping the LLM's Brain
07:43 Revolutionary RAG Capabilities
14:22 The Hype in Machine Learning
18:28 Conclusion and Final Thoughts
Comments

Your video saved a lot of time.

My colleagues and I are working on a project to create an RPG (role-playing game) character creator with AI.

We intended to "train" the AI with fine-tuning so it would give answers based on the context of the universe created by the game master. From that, it would output traits, personality, backstory, tips for building a good character, and more.

It turns out we would need to prepare raw data to "feed" the AI for fine-tuning, and I guess that would be too difficult to accomplish.

I feel much more confident giving the context of the universe as a prompt cache. That would make much more sense.

If I fine-tuned it, I would get less creative outputs and a kind of bias toward its training data, resulting in a less interesting product.

The downside is that it seems to cost much more.
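The approach described above could be sketched roughly like this. This is a minimal illustration, not a real API: the function name and prompt wording are made up, and it only builds the message payload. The one real detail it relies on is that providers such as OpenAI cache long, repeated prompt prefixes automatically, so keeping the unchanging universe text at the front of every request maximizes cache hits:

```python
# Sketch: instead of fine-tuning, send the game master's universe
# description as a static prefix on every request so the provider's
# prompt caching can reuse it. All names here are illustrative.

def build_character_prompt(universe_context: str, player_request: str) -> list[dict]:
    """Return chat messages with the static universe text first.

    The system message is identical across requests (cacheable);
    only the short user message at the end varies.
    """
    return [
        # Static prefix: identical for every request -> cache hit.
        {"role": "system",
         "content": "You are an RPG character creator. Use only this "
                    "universe as context:\n" + universe_context},
        # Variable suffix: changes per player request.
        {"role": "user", "content": player_request},
    ]

messages = build_character_prompt(
    universe_context="A frozen archipelago ruled by whale-bone mages.",
    player_request="Create a rogue with a tragic backstory.",
)
print(messages[0]["role"])  # system
```

The key design point is prompt ordering: anything that changes per request goes last, so the shared prefix stays byte-identical and eligible for caching.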

Thank you for the video.

viniciusgp

super good video man, keep going! you are definitely going to get big here, you are bringing a lot of value!

dolanbright