How to Generate Structured Output with AI (Step-by-Step Tutorial of SLIM Function Calls)

In this episode, join Angelina and Mehdi for a discussion of LLM function calls - specifically, the SLIM models by LLMWare. These are small, quantized models optimized to run on CPU and tailored to tasks that require structured, formatted output.

00:00 Intro
00:08 What are SLIMs (small, specialized function-calling models) for multi-step automation?
00:34 Motivation - complex agent workflows + structured output
01:07 LLMWare
03:02 New feature - SLIM (Structured Language Instruction Models)
03:34 What is function calling?
03:44 Unstable LLM output
05:17 The need for stabilizing LLM output for downstream use
05:50 Getting structured output through prompting
06:13 Achieving the same goal with function calling
07:02 LLMWare's SLIM model suite
10:07 Function calling vs. Agents
10:45 Code Walk-through
11:25 SLIM sentiment model
13:16 SLIM topic model
13:44 Quantized models
14:22 Fine-tuning your own SLIM model
16:09 LLMfx
18:03 Run multiple models from SLIM at once
22:46 Pros and cons of implementing SLIM in production

🖼️ Blogpost for today:

Stay tuned for more content! 🎥 Thank you for watching! 🙌