LLM Tool Use - GPT4o-mini, Groq & Llama.cpp


VIDEO RESOURCES:

OTHER TRELIS LINKS:

TIMESTAMPS:
0:00 Function Calling - Cheap, Fast, Local and Enterprise
0:58 Video Overview
2:48 Tool Use Flow Chart
10:18 Function preparation tips
14:36 Code walk-through for function / tool preparation
23:50 Prompt preparation and Recursive tool use
34:58 GPT4o-mini tool use performance
37:29 Zero shot prompting and Runpod Phi-3 endpoint setup
49:34 Phi-3 Mini Zero Shot Performance
53:20 Parallel function calling with Phi-3
59:43 Low latency tool use with Groq - Zero shot
1:01:29 Groq Llama 3 8B Zero Shot Tool Use Performance
1:03:11 Groq Llama 3 8B Fine-tune Performance
1:05:42 Groq Llama 3 70B Fine-tune Performance
1:17:26 Final Tool Use Tips
1:19:00 Resources
COMMENTS:

finally someone who isn't a langchain cowboy. nice walk-through, cheers.

robcz

I like your videos, man. keep up the good work! 👏

javadkhataei

Best function calling explanation. Thank you!

loryo

Re error correction for multiple tool calls: have you explored one-shot logic (on the response) for cases where the wrong tool is called first, a tool call is incorrectly labeled as parallel or series, etc.? It seems like a robust error-correction script on the response could catch a lot of these mistakes, combined with an expected result schema in the prompt itself. Seems doable. I'll watch again and see if I can answer my own question. My hope is to integrate tool calling for applications beyond R&D. Great video!

GrahamAnderson-zx
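
The response-validation idea raised in the comment above could be sketched roughly as follows: before executing any tool call the model returns, check its name and arguments against a registry of known tool schemas, and feed any errors back to the model for a retry. The tool names and schema format here are hypothetical illustrations, not from the video.

```python
# Hypothetical registry of tools and their expected arguments.
TOOL_SCHEMAS = {
    "get_weather": {"required": {"city"}, "optional": {"units"}},
    "get_stock_price": {"required": {"ticker"}, "optional": set()},
}

def validate_tool_call(call: dict) -> list[str]:
    """Return a list of error strings; an empty list means the call looks valid."""
    errors = []
    name = call.get("name")
    if name not in TOOL_SCHEMAS:
        errors.append(f"unknown tool: {name!r}")
        return errors
    schema = TOOL_SCHEMAS[name]
    args = set(call.get("arguments", {}))
    missing = schema["required"] - args
    extra = args - schema["required"] - schema["optional"]
    if missing:
        errors.append(f"{name}: missing required args {sorted(missing)}")
    if extra:
        errors.append(f"{name}: unexpected args {sorted(extra)}")
    return errors

# A malformed call is caught before execution; the error strings can be
# appended to the conversation so the model can correct itself.
print(validate_tool_call({"name": "get_weather", "arguments": {"units": "C"}}))
```

This catches unknown tool names and argument mismatches cheaply; it does not catch the parallel-vs-series mislabeling the commenter mentions, which would need ordering rules on top.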