Function Calling in Ollama vs OpenAI

Function Calling is awesome even though it’s a terrible name for the feature. Function calling doesn't call the function, but makes it possible for you to call a function.
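The point above can be sketched in a few lines. Everything here is simulated (the function name, the response shape, and the dispatch table are hypothetical — real OpenAI/Ollama payloads wrap this in a provider-specific envelope), but the division of labor is the one the video describes: the model only returns a function name plus JSON arguments, and your own code does the calling.

```python
import json

def get_weather(city: str) -> str:
    # Your real function -- the model never executes this itself.
    return f"Sunny in {city}"

# Simulated model output when it "calls" a function:
# a name and JSON-encoded arguments, nothing more.
model_response = {
    "name": "get_weather",
    "arguments": '{"city": "Berlin"}',
}

# It is YOUR code that looks the function up and invokes it.
available = {"get_weather": get_weather}
fn = available[model_response["name"]]
args = json.loads(model_response["arguments"])
result = fn(**args)
print(result)  # Sunny in Berlin
```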

Sorry, posted this one before at a low resolution.

If you think I am wrong about the OpenAI feature, share an example using the core OpenAI API and let me know. I will update the video right away. A few folks have tried, but everyone points at other products from OpenAI and not the one covered in this video. Happy to admit when I am wrong, but prove it first.

Comments

Wow, this was a contentious one. Some folks are really eager to prove that function calling does call functions, or that I was just wrong in this video. But they just say it without providing any supporting information.

So if you think that I am wrong, and what I say in this video is not 100% factual, then please provide some sort of documentation or a tutorial or a video or anything that suggests that function calling in the base OpenAI API is not exactly how I described it. Don't just point to some other product as your example. If you have any evidence, I would be happy to do an update video.

This video talks specifically about the base OpenAI API. Otherwise, simply accept that function calling doesn't call functions and instead is all about formatting the output, which is incredibly valuable. It certainly got better at that when the JSON format option became available later on.
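For reference, the later JSON mode mentioned here is enabled per request in the chat-completions API; a request body might look roughly like this (the model name is just an example, and note that JSON mode expects the word "JSON" to appear somewhere in the messages):

```json
{
  "model": "gpt-3.5-turbo-1106",
  "response_format": { "type": "json_object" },
  "messages": [
    { "role": "system", "content": "Reply only with valid JSON." },
    { "role": "user", "content": "List three colors." }
  ]
}
```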

technovangelist

Love the way you add complexity to the answer; it makes the whole concept less scary. Thanks for sharing.

wayneqwele

This is awesome thanks, and just what I needed. Returning consistent JSON is critical for building applications on top of local LLMs.

thehve

Damn, this is the type of content, the style of content I pay my internet bill for. The simplicity when you walk through it, no nonsense. Insane!

theyashwanthsai

I admit that until now I misunderstood function calling. Now I get it. I think "format JSON" would be a better name. Thumbs up, Matt!

trsd

I was confused about function calling, as I could not figure out how the model was calling a function. The model was just returning JSON-formatted data. Thanks for the clear and concise explanation.

dikranhovagimian

Thanks Matt for another great super clear video. You are a fine teacher and I'm really happy to have AI related creators that are not just on the hype train but actually thoughtfully looking at the tools in a grounded sort of way. Cheers and I've been to the island where you live a couple of times. Really nice vibe there!
P.S. For other video ideas, I'd love you to explain some approaches to working with long-term memory and chat history in Ollama. It would be nice to be able to save and load past sessions like you can in ChatGPT, but beyond that, to save the chat history to a vector DB and then do RAG as needed for the model to recall certain things we have chatted about before, like my dog's name or my favorite band. That would be cool! Thanks

jacobgoldenart

You are great! No hype, no nonsense, straight to the point and super-clear! Thank you for making your videos. And sorry to hear that it took a bite from your family time.

SashaBaych

Yes, I think function calling is very important. Maybe a video about fine-tuning a model for function calling? ;-) Great video. Thanks for taking time out to share your knowledge.

Canna_Science_and_Technology

I found this really helpful, thank you. An interesting follow-up would be to explore the "instructor" lib, which uses pydantic and has auto-retry logic if the LLM misbehaves.
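The underlying pattern that kind of library implements — validate the model's reply against a typed schema and raise on mismatch so a caller can retry — can be sketched with just the standard library. The `Recipe` schema and field names here are made up for illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class Recipe:
    title: str
    minutes: int

def parse_reply(raw: str) -> Recipe:
    """Validate a model reply against the expected schema; raise on mismatch."""
    data = json.loads(raw)    # raises json.JSONDecodeError on non-JSON text
    recipe = Recipe(**data)   # raises TypeError on missing or extra keys
    if not isinstance(recipe.minutes, int):
        raise ValueError("minutes must be an int")
    return recipe

print(parse_reply('{"title": "Toast", "minutes": 5}'))
```

A real pydantic model would check all field types automatically; the dataclass version only spot-checks `minutes` to keep the sketch short.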

JulianHarris

Thanks for silently staring at me for 10 seconds, I needed it.

malipetek

this video is literally worth a million bucks (someone is gonna make a million dollar app, haha). TY Matt for sharing with us freely! Nice job!

phobosmoon

I don't care about the naming debate, but I want to point out that you don't pass in just a single function description; you can pass in multiple. The functions take parameters, and the model chooses which parameters to pass to which functions. Some parameters might be optional, or have heuristics in their descriptions explaining to the model when they are best used. The model does "call" a function by responding to you in a certain format, and you end up calling the function, but the choice of parameters is up to the model. This is not just about formatting the responses, and it is a great way to enrich the capabilities of an agent with "reasoning", as the results of the function calls expand the context with new information. When I need the model to respond in a certain format, I just ask it to do that, without function calling. When I need the model to "try" to "reason" and call the right function with the right parameters to expand the context, I use function calling.
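The loop described above can be sketched like this. The two tools, the model's pick, and the message shapes are all simulated stand-ins (real APIs return the tool choice in a provider-specific envelope), but the flow is the same: the model picks a function and its parameters, we execute it, and the result is appended to the context for the model's next turn.

```python
import json

def get_weather(city: str, unit: str = "celsius") -> str:
    return f"18 {unit} in {city}"

def get_time(city: str) -> str:
    return f"14:00 in {city}"

# Several functions are advertised to the model; it picks one and
# fills in the parameters -- executing it remains our job.
tools = {"get_weather": get_weather, "get_time": get_time}

messages = [{"role": "user", "content": "Weather in Oslo?"}]

# Simulated model turn: it chose a tool and its parameters.
tool_call = {"name": "get_weather", "arguments": '{"city": "Oslo"}'}

# Execute the call and append the result, expanding the context the
# model sees on its next turn -- this is the "reasoning" loop.
result = tools[tool_call["name"]](**json.loads(tool_call["arguments"]))
messages.append({"role": "tool", "name": tool_call["name"], "content": result})
print(messages[-1]["content"])  # 18 celsius in Oslo
```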

zaripych

Most accurate name I can think of is "generate function arguments". "Format JSON" could be a beautifier, or transforming a different format to JSON.

nuvotion-live

Matt - brilliant video as always, thank you. Experimenting here with "function calling" so this is of huge benefit. Appreciated.

moonfan

Totally agree. I actually add an LLM agent in my flows to check that the response matches the desired JSON and can be parsed; if it can't, it re-queries the previous agent with extra prompting to say, make sure you follow the JSON. This also works with models without function calling built in.
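The check-and-re-query flow described above might look roughly like this. The `query_model` callable stands in for whatever LLM call sits in the flow (here a stub that fails once, then complies), and the corrective nudge text is just an example:

```python
import json

def ask_with_retries(query_model, prompt: str, max_tries: int = 3) -> dict:
    """Re-query with a corrective nudge until the reply parses as JSON."""
    for attempt in range(max_tries):
        reply = query_model(prompt)
        try:
            return json.loads(reply)
        except json.JSONDecodeError:
            # Feed the failure back as extra prompting for the next attempt.
            prompt += "\nYour last reply was not valid JSON. Reply with JSON only."
    raise RuntimeError(f"no valid JSON after {max_tries} tries")

# Stub model: first reply is malformed, second is valid JSON.
replies = iter(["oops, not json", '{"status": "ok"}'])
print(ask_with_retries(lambda p: next(replies), "Give me status as JSON"))
# {'status': 'ok'}
```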

jesongaming

I like the pace and tone of your videos

workflowinmind

This was really eye opening, thanks for enlightening me! Love your channel! Keep up the good work!

JonzieBoy

Great video! I think it would be useful to include the use of the stop sequence in this context.

qyztxqr

Thank you for sharing your insights on the 'function calling' feature. I appreciate your perspective and the effort to demystify this concept for a wider audience. I'd like to offer a clarification on the function calling feature as it pertains to AI models, particularly in the context of OpenAI's API, to enrich our understanding.

Function calling, as it is implemented by OpenAI, does indeed involve the model in the process of generating responses that incorporate the outcomes of predefined functions. The key distinction lies in how 'calling a function' is conceptualized. Rather than formatting the output for external functions to be executed by the user's application, function calling enables the model to leverage internal or predefined functions during its response generation process. This means the model dynamically integrates the results of these functions into its output.

The feature is named 'function calling' because it extends the model's capability to 'call' upon these functions as part of forming its responses, not because the model executes functions in the traditional programming sense where code is run to perform operations. This naming might seem a bit abstract at first, but it accurately reflects the model's enhanced capability to internally reference functions to enrich its responses.

Understanding function calling in this light highlights its innovative approach to making model outputs more dynamic and contextually rich. It's not just about formatting data but about embedding the function's utility directly into the model's processing pipeline. This feature opens up new possibilities for integrating AI models with complex data processing and analysis tasks, directly within their response generation workflow.

I hope this explanation clarifies the intent and functionality behind the 'function calling' feature. It's indeed a powerful tool that, when understood and utilized correctly, significantly broadens the scope of what AI models can achieve in their interactions.

WalterStroebel