The BEST Mental Model for Optimizing Your LLMs - Part 1

🤔 Looking to implement your own LLM-based RAG system, but don't know where to begin?

In this episode, join Angelina and Mehdi for a discussion about strategies to optimize your LLM's performance.

What You'll Learn:
🔎 What options do you have to optimize your LLM's performance, and when should you use them?
🚀 How far can prompt engineering go?
🛠 The best mental model for strategizing your happy path.

✏️ In This Episode:
00:00 Intro
00:16 What options do I have for optimizing LLM performance?
00:38 What is the first thing we should do to optimize LLM accuracy?
02:38 What is context optimization?
04:12 What is LLM optimization?
05:00 The best mental model for optimizing LLMs
07:53 Prompt Engineering
09:08 Long context window
10:48 How far can we really take prompt engineering?
11:13 What if after prompt engineering the model is still not working very well?
14:23 RAG review

Stay tuned for more content! 🎥 Thank you for watching! 🙌
Comments

Thank you so much for the information you share, guys. Despite my concentration problems, I find myself at the end of the video without even realizing it. You are among the few people I can truly learn something from. Lady cute and mr charisma :D

spookymv

Super underrated channel. Just binge watched 5 videos today!

joyalajohney

Hi Mehdi, in my experience, fine-tuning is a good option for tasks with structured data. For unstructured data, RAG is a better way to increase the accuracy of the output.

vahid_afshari