Multi-Agent Function Calling LLMs - CODE for AI Agents

OpenAI's new function calling, with the new OpenAI models (like gpt-4-0613), integrated into Python class-based AI agents: a code example combining GPT-4, AI agents, function calling, and external APIs. No LangChain required anymore.

Discover OpenAI's new model functionality (JSON-formatted function calls) for querying OpenAI's API and then external APIs.

Please note that simplifications have been introduced (in both the verbal explanations and the code) for educational purposes and easier understanding. For example, I omitted the error-checking code for cases where GPT-4 hallucinates malformed JSON in its answer.
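To illustrate the flow described above, here is a minimal sketch using the function-calling interface of the legacy `openai` Python SDK (pre-1.0). The weather function and its schema are illustrative assumptions, not from the video, and the `safe_parse_args` guard addresses exactly the hallucinated-JSON caveat mentioned here:

```python
import json

# Illustrative function schema the model can choose to call
FUNCTIONS = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

def get_current_weather(city: str) -> str:
    # Placeholder for a real external API call
    return json.dumps({"city": city, "temp_c": 21})

def safe_parse_args(raw: str):
    """Guard against the model hallucinating malformed JSON arguments."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return None

def handle_response(message: dict) -> str:
    """Dispatch a ChatCompletion message that may request a function call."""
    call = message.get("function_call")
    if call and call["name"] == "get_current_weather":
        args = safe_parse_args(call["arguments"])
        if args is not None:
            return get_current_weather(**args)
    return message.get("content", "")

# The real request would look roughly like this (requires an API key):
# response = openai.ChatCompletion.create(
#     model="gpt-4-0613",
#     messages=[{"role": "user", "content": "Weather in Berlin?"}],
#     functions=FUNCTIONS,
#     function_call="auto",
# )
```

The dispatcher and JSON guard run without network access, so only the commented-out API call needs credentials.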

For more official code examples and notebooks on function calling:
Please consult the official notebook ("openai-cookbook") that covers how to use the Chat Completions API in combination with external functions to extend the capabilities of GPT models:

Recommend this notebook for more code examples:
We'll create our agent in this step, including a Conversation class to support multiple turns with the API, and some Python functions to enable interaction between the ChatCompletion API and our knowledge base functions for arXiv conversations:
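A hypothetical sketch of what such a Conversation class might look like; the cookbook's actual version differs in detail, and the system prompt below is an assumption for illustration:

```python
class Conversation:
    """Minimal multi-turn message store for the ChatCompletion API."""

    def __init__(self, system_prompt=None):
        self.messages = []
        if system_prompt:
            self.add_message("system", system_prompt)

    def add_message(self, role: str, content: str) -> None:
        # Messages accumulate so each API call sees the full history
        self.messages.append({"role": role, "content": content})

    def display(self) -> str:
        return "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)

# Usage: each API round-trip appends both sides of the exchange
conv = Conversation("You answer questions about arXiv papers.")
conv.add_message("user", "Summarize recent work on LLM agents.")
```

Passing `conv.messages` as the `messages` argument on every request is what gives the agent multi-turn memory.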

For the displayed scientific literature (cognitive science) during the video:
All rights with authors:
AI and the transformation of social science research
Science, Vol. 380, No. 6650

AI-Augmented Surveys: Leveraging Large Language Models
for Opinion Prediction in Nationally Representative Surveys

#openai
#gpt4
#agents #ai
Comments

Hi, I am developing an AI agent using function calling.
After getting data from the API via a function call,
the LLM needs to perform a series of steps on the function call response data, but the LLM only performs the first manipulation step.
Can you help with this issue? I'm using the GPT-4 model.

siddheshwarpandhare

This is so amazing, the way you explain things is not just understandable and fun, but brings the explanation back to how it actually works! It's super helpful (13:13). In any case, amazing work! Please keep it up

cesarchicaiza

So do I have to match my (user) prompt to the function description / function arguments every time I want to use this specific function? If the LLM matching process is done once, then I should be able to type another (user) prompt, send it to the same LLM, and the LLM works out which function to match it to.

skylark

It's interesting to see how OpenAI are effectively replacing the orchestrator or planner in libraries like LangChain or Semantic Kernel. This feature looks like it natively replaces Skills or Plugins in Semantic Kernel for example. It'll be interesting to see how this evolves, and if it will slowly provide other planning features to provide a full cognitive architecture for applications to consume.

snarkyboojum

Thx, please make more videos on this theme

VOKorporation

Yeah, why ask people if their opinions may have changed, just base research on a static model that has not seen new ideas in a couple of years.

pensiveintrovert