Function calling with tools and agent using local LLM model

In this video we'll look at how to use function calling with a local LLM. Here we are using Llama 3 running on RunPod; the same example also works with the Mistral v3 and Phi-3 models.
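To illustrate the pattern the video walks through, here is a minimal sketch of the tool-dispatch side of function calling. The tool schema follows the OpenAI-style "tools" format that llama3 / mistral / phi3 servers with an OpenAI-compatible endpoint commonly accept; the function name and the model response are hypothetical stand-ins, and the actual chat-completions call is stubbed out.

```python
import json

# Illustrative tool schema (OpenAI-style "tools" format); the name
# "get_weather" is a placeholder, not a tool from the video.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    # Stubbed local function; a real agent would query a weather API here.
    return f"Sunny in {city}"

AVAILABLE_FUNCTIONS = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Run the function the model asked for and return its result as text."""
    fn = AVAILABLE_FUNCTIONS[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)

# Simulated model response requesting a tool call; in practice this
# comes back from the model's chat-completions endpoint.
tool_call = {
    "function": {"name": "get_weather", "arguments": '{"city": "Berlin"}'}
}
print(dispatch(tool_call))
```

In a full agent loop, the string returned by `dispatch` would be appended to the conversation as a tool message and sent back to the model so it can produce the final answer.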

For steps on how to deploy an LLM on RunPod, watch the following video:

If you run into any development issues, please leave them in the comments and I'll try to make a video on them.