Fine-tuning or RAG?


0:00 Comparing Fine-tuning and Retrieval Augmented Generation
0:34 Using LLMs for Specialized Domains
1:13 Fine-tuning vs In-context Learning Techniques
2:23 Causes of LLM Factual Errors and Hallucinations
3:50 Constructing the Experiment Dataset
4:45 Models Tested and Accuracy Comparison
5:51 RAG Outperforms Fine-tuning Across Models
6:20 Why RAG Performs Better Than Fine-tuning
7:01 Caveats and Open Questions
7:39 Conclusion and Wrap-up

Comments

Surprised, this saved me a few weeks of testing 🙌❤️

andrew.derevo

Thank you for saving my group meeting! Your video helps a lot!

karinlv

Just wanted to say I really appreciate your videos. Everything is short and concise and I love that you’re always using papers as the foundation for the conclusions. Keep it up!

HampusAhlgren

Very interesting paper, thanks for covering!

sashaha

Seems clear that for 'current events' RAG is going to win, but for broader, domain-specific themes or logic, how does fine-tuning stack up? E.g. generating code against our internal suite of APIs... If the context is big enough, ICL should be fine, but RAG may miss some key docs based on semantic similarity alone... I guess... I should write a paper 😂

RoulDukeGonzo
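
The retrieval concern raised in the comment above can be made concrete with a toy sketch: RAG-style retrieval ranks documents purely by embedding similarity to the query, so a document whose embedding is distant but whose content is logically essential can be dropped. All names and vectors below are hypothetical stand-ins for real embeddings, not taken from the video.

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, k=2):
    # Return the top-k doc ids ranked by cosine similarity to the query.
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:k]]

# Toy corpus: "edge_cases" holds the logic the query actually needs,
# but its embedding points in a different direction.
docs = [
    {"id": "api_overview", "vec": [0.9, 0.1, 0.0]},
    {"id": "auth_details", "vec": [0.7, 0.3, 0.1]},
    {"id": "edge_cases",   "vec": [0.1, 0.2, 0.9]},
]
query = [1.0, 0.0, 0.0]
print(retrieve(query, docs))  # top-2 by similarity; "edge_cases" is cut
```

With k=2 the similarity ranking keeps `api_overview` and `auth_details` and silently drops `edge_cases`, which is exactly the failure mode the commenter worries about for domain-specific logic.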