Building Real-World LLM Products with Fine-Tuning and More with Hamel Husain - 694

Today, we're joined by Hamel Husain, founder of Parlance Labs, to discuss the ins and outs of building real-world products using large language models (LLMs). We kick things off discussing novel applications of LLMs and how to think about modern AI user experiences. We then dig into the key challenge faced by LLM developers: how to iterate from a snazzy demo or proof-of-concept to a working LLM-based application. We discuss the pros, cons, and role of fine-tuning LLMs and dig into when to use this technique. We cover the fine-tuning process; common pitfalls in evaluation, such as relying too heavily on generic tools and missing the nuances of specific use cases; open-source LLM fine-tuning tools like Axolotl; the use of LoRA adapters; and more. Hamel also shares insights on model optimization and inference frameworks and how developers should approach these tools. Finally, we dig into how to use systematic evaluation techniques to guide the improvement of your LLM application, the importance of data generation and curation, and the parallels to traditional software engineering practices.
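For listeners new to the LoRA adapters mentioned above: the core idea is that instead of updating a full weight matrix W during fine-tuning, you learn a low-rank update B·A (with rank r much smaller than the matrix dimensions), scaled by alpha/r. A minimal NumPy sketch of that arithmetic, not from the episode, with illustrative shapes and hyperparameters:

```python
import numpy as np

# LoRA: approximate a full weight update with a low-rank product B @ A.
# All sizes and values here are illustrative placeholders.
d, r, alpha = 512, 8, 16           # hidden size, adapter rank, scaling
W = np.random.randn(d, d)          # frozen pretrained weight
A = np.random.randn(r, d) * 0.01   # trainable down-projection
B = np.zeros((d, r))               # trainable up-projection, init to zero

# Adapted weight: W + (alpha / r) * B @ A
delta = (alpha / r) * (B @ A)
W_adapted = W + delta

# With B initialized to zero, the adapter starts as a no-op,
# so fine-tuning begins from the pretrained model's behavior:
assert np.allclose(W_adapted, W)

# Parameter savings: 2*d*r trainable values vs d*d for full fine-tuning.
trainable = A.size + B.size        # 2 * 512 * 8 = 8,192
full = W.size                      # 512 * 512 = 262,144
```

This is why LoRA trades a small accuracy gap for a large reduction in trainable parameters and optimizer memory, and why adapters can be swapped per task on top of one frozen base model.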

📖 CHAPTERS
===============================
00:00 - Introduction
02:31 - LLM novel use cases
07:15 - Fine-tuning LLMs
08:45 - When do you want to fine-tune?
13:42 - Fine-tuning trade-offs
19:03 - Fine-tuning vs continued pre-training
22:41 - Repositories
25:33 - Process of fine-tuning LLMs
41:42 - LoRA
44:31 - Inference frameworks
55:00 - Evaluation measurement for LLMs
1:03:50 - Frameworks vs tools in LLM evaluation
1:08:31 - Domain-specific vs general use case
1:15:55 - Recap and future directions

💬 COMMENTS
===============================

My first time on this channel, and what an amazing conversation. Subscription well earned!

SethMbhele

Agree with previous post. Awesome interview and so much great info!
Much appreciated!

realzeti