GPT-4 Function Calling Python Decorator

I created a Python library that adds a decorator for exporting functions to the ChatGPT function calling API, inspired by @memespdf's comment on @sentdex's YouTube video about the function calling API.
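A minimal sketch of the idea (hypothetical names, not the library's actual API): a decorator can inspect a function's signature and docstring and attach an OpenAI-style JSON schema to it.

```python
import inspect

# Rough mapping from Python annotations to JSON Schema types.
_PY_TO_JSON = {str: "string", int: "integer", float: "number", bool: "boolean"}

def openai_func(func):
    """Attach an OpenAI function-calling schema to `func` as `func.schema`.

    This is a sketch of the general technique, not the library's real code.
    """
    sig = inspect.signature(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        hint = param.annotation if param.annotation is not inspect.Parameter.empty else str
        properties[name] = {"type": _PY_TO_JSON.get(hint, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default -> required argument
    func.schema = {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }
    return func

@openai_func
def calculate_str_length(string: str) -> int:
    """Calculate the length of a string."""
    return len(string)
```

`calculate_str_length.schema` could then be collected into the `functions` list sent with a chat completion request.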

Comments

What...a...boss

Thanks man! You just turned OpenAI into Spring lol

jaysonp

Hey man, great videos! Appreciate all the effort you're putting in. Any chance you'd be willing to dive into some examples showing how to implement the new function calling features in PHP? I haven't found a single person implementing the new models in PHP yet, so that would be a huge help to a lot of people. The new function calling features and larger context window are quite exciting! Thanks

MattHofstadt

That's an awesome way to make function calls easier and customizable. Anyway man, do you have a gpt-4o-mini version of that? Appreciate it, thanks.

noraasicnarf
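For what it's worth, the model is just a string field in the request payload, so the same schema setup should carry over to gpt-4o-mini unchanged. A sketch (hypothetical helper; newer models also accept the `tools` format instead of `functions`):

```python
# The model name is an ordinary string in the payload; swapping models
# does not change the function schemas themselves. (Sketch, not the
# library's actual code.)
def build_chat_request(model, messages, functions):
    return {"model": model, "messages": messages, "functions": functions}

request = build_chat_request(
    model="gpt-4o-mini",  # change only this field to switch models
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=[{"name": "get_current_weather",
                "parameters": {"type": "object", "properties": {}}}],
)
```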

Great video! To add descriptions of parameters, you could use pydantic models and the Field abstraction. Langchain used something similar to constrain JSON outputs

AristAristA
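The pydantic idea from the comment above could look roughly like this (hypothetical model names, pydantic v2): `Field` descriptions flow into the generated JSON Schema, which can then serve as a function's `parameters`.

```python
from pydantic import BaseModel, Field

# Hypothetical parameter model: each Field description ends up in the
# JSON Schema that gets sent to the API.
class WeatherQuery(BaseModel):
    location: str = Field(description="City and state, e.g. Boston, MA")
    unit: str = Field(default="celsius", description="Temperature unit")

schema = WeatherQuery.model_json_schema()  # pydantic v2 API
```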

this is awesome, glad I found the video. Makes it so much easier on the project I'm working on to make GPT fully autonomous, my Quantum HiveMind hehe 😊

Whisper_InThe_Rain

Just curious, with the weather example: if you had asked what the weather was, then just said "Boston" when it asked for a location instead of "weather in Boston", would it work correctly?

tomba

Hey, great video! Question: when you respond to OpenAI letting it know that the function has been successfully called (e.g., "File Successfully Written"), how does OpenAI know that that response is coming from the system itself, letting it know the function has been called, instead of from the user just sending a message? Do you tag that request somehow, or does it just know from the context of the request and the message itself?

hickam
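The API distinguishes these by the message's `role` field: a function result goes back with role `"function"` (newer API versions use role `"tool"` plus a `tool_call_id`), not role `"user"`. A sketch of what the message list looks like (hypothetical function name and content):

```python
# The role field is what tells the API "this is a function result,
# not a user message".
messages = [
    {"role": "user", "content": "Write 'hello' to notes.txt"},
    {"role": "assistant", "content": None,
     "function_call": {"name": "write_file",
                       "arguments": '{"path": "notes.txt", "text": "hello"}'}},
    # The function's result, tagged by role, not inferred from context:
    {"role": "function", "name": "write_file",
     "content": "File Successfully Written"},
]
```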

The need to always send that functions list or context makes no sense to me. Wouldn't it be better if OpenAI could receive a conversation_id and store the context automatically? Then I'd just need to send the user input when using the chat mode of the GPT API.

And when the conversation_id changes, it's a new chat. This would save on data sent to the API...

Functions could also be stored on the OpenAI dashboard... At least we could just send the functions we want to use as arguments, like allowed_functions: functionA, functionB

unknotmiguel
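Until something like server-side function storage exists, the per-request list can at least be trimmed client-side. A sketch of the allowed_functions idea from the comment above (hypothetical names):

```python
# Client-side registry of function schemas; only the allowed subset is
# sent with a given request, reducing payload size.
REGISTRY = {
    "functionA": {"name": "functionA", "parameters": {"type": "object", "properties": {}}},
    "functionB": {"name": "functionB", "parameters": {"type": "object", "properties": {}}},
    "functionC": {"name": "functionC", "parameters": {"type": "object", "properties": {}}},
}

def select_functions(allowed):
    """Return only the registered schemas named in `allowed`."""
    return [REGISTRY[name] for name in allowed if name in REGISTRY]

payload_functions = select_functions(["functionA", "functionB"])
```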

I have tried it but got something wrong, what can I do?
"""
Go ahead, ask for the weather, a YouTube channel recommendation or to calculate the length of a string!
You: how long is the sentence ""how long it is""
Traceback (most recent call last):
File "/Users/rockets/github/ai-agent/chatbot.py", line 123, in <module>
run_conversation(prompt)
File "/Users/rockets/github/ai-agent/chatbot.py", line 87, in run_conversation
function_response =
File "/Users/rockets/github/ai-agent/openai_decorator/openai_decorator.py", line 86, in wrapper
return func(*args, **kwargs)
TypeError: calculate_str_length() missing 1 required positional argument: 'string'
""""

rocketscn
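The traceback above means the model's function_call arrived without the required "string" argument, so the wrapped function was invoked with no arguments. A defensive sketch (hypothetical helper, not the library's actual code) that validates the parsed arguments against the signature before dispatching:

```python
import inspect
import json

def safe_call(func, arguments_json):
    """Call `func` with the model-supplied JSON arguments, but report
    missing required parameters instead of raising a TypeError."""
    kwargs = json.loads(arguments_json or "{}")
    sig = inspect.signature(func)
    missing = [name for name, param in sig.parameters.items()
               if param.default is inspect.Parameter.empty and name not in kwargs]
    if missing:
        # Return an error string the model can react to (e.g. by re-asking).
        return f"Error: missing required argument(s): {', '.join(missing)}"
    return func(**kwargs)

def calculate_str_length(string: str) -> int:
    return len(string)
```

Feeding the error back to the model as the function result usually prompts it to retry with the argument filled in.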