Building LLM Agents in 3 Levels of Complexity: From Scratch, OpenAI Functions & LangChain

In this video, we work through the basics of building LLM-based agents. First we use only basic OpenAI API calls coupled with some hacky prompting to let the model call Python functions. Then we look at OpenAI function calling and LangChain as more advanced options.
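The from-scratch approach boils down to: describe your Python functions in the prompt, have the model reply with a call expression, and exec that string. A minimal sketch with the model's reply hard-coded (in the video it comes back from an OpenAI chat completion; the function names here are illustrative, not taken verbatim from the video):

```python
import os

# Tools the model is told about in the system prompt (names illustrative).
def create_directory(path):
    os.makedirs(path, exist_ok=True)

def create_file(path, contents=""):
    with open(path, "w") as f:
        f.write(contents)

# Stub for what the model sends back when asked to "make a notes file in demo/".
model_output = 'create_directory("demo"); create_file("demo/notes.txt", "hello")'

# The hacky part: trust the model's string and exec it against a namespace
# that exposes only the tool functions.
exec(model_output, {"create_directory": create_directory, "create_file": create_file})
```

Running exec on raw model output is obviously unsafe outside a sandbox, which is part of why the video then moves on to structured function calling.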

📚 Chapters:
00:02 - Introduction to the video and overview of agents and their capabilities.
00:41 - Discussion on choosing frameworks for building agents.
00:54 - Exploring large language models for real-world actions.
01:48 - Building simple agents using LangChain.
02:39 - Setting up and testing OpenAI API.
03:40 - Connecting large language models with Python functions.
04:25 - Creating and testing directory and file management functions.
05:50 - Toolformer paper.
06:14 - Writing a class to organize functionalities.
06:56 - Planning tasks and executing actions with the model.
07:21 - Improving functions for practical use.
07:43 - Setting up and testing tasks with the model.
08:03 - Examining model output for function calls.
09:19 - Using Python's exec function for model outputs.
10:37 - Discussion on extending model capabilities without frameworks.
11:05 - Improving prompt engineering for function calls.
12:10 - Testing model's ability to organize function calls.
13:06 - Challenges in scaling model complexity.
14:11 - Introduction to OpenAI function calling with JSON.
16:00 - Detailed explanation of OpenAI function calling.
18:10 - Using OpenAI function calling for directory creation.
19:06 - Structuring function calls for better control.
20:49 - Jumping from strings to JSON in function calling.
21:51 - Example of function calling using OpenAI API.
23:08 - Exploring tool functionality in function calls.
24:34 - Setting up a run function for OpenAI function calling.
26:31 - Discussion on layers of complexity in agent building.
27:32 - Implementing OpenAI function calling in previous examples.
29:23 - Testing function calls with OpenAI API.
30:13 - Transition to using LangChain for agent development.
31:44 - Setting up tools and agent executor in LangChain.
33:21 - Running an agent task with LangChain.
34:21 - Advantages of LangChain in agent development.
35:56 - Adding and testing new functions with LangChain.
38:07 - Discussing the composability of LangChain components.
40:57 - Live example of extending LangChain functionality.
42:48 - Concluding thoughts on building agents with different frameworks.
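The OpenAI function-calling chapters above replace hand-rolled prompting with structured tool definitions. A minimal sketch of the tool schema and of dispatching one returned tool call (the response is stubbed here; in the video it comes from a chat completions request, and the function name is illustrative):

```python
import json
import os

# Tool schema in the shape the OpenAI tools API expects (illustrative function).
tools = [{
    "type": "function",
    "function": {
        "name": "create_directory",
        "description": "Create a directory at the given path.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def create_directory(path):
    os.makedirs(path, exist_ok=True)
    return f"created {path}"

# Stub of one tool call as the API returns it: a name plus JSON-encoded arguments.
tool_call = {"name": "create_directory", "arguments": json.dumps({"path": "demo_dir"})}

# Dispatch: look the name up in a registry and unpack the decoded arguments.
available = {"create_directory": create_directory}
result = available[tool_call["name"]](**json.loads(tool_call["arguments"]))
```

The win over the exec approach is that the model returns structured JSON you can validate before running anything, instead of a Python string you have to trust.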

Comments

I loved the first half of the video about building agents from scratch; it's very educational and rarely covered. Most content uses LlamaIndex, LangChain, and other frameworks, which in my opinion are a hateful abstraction. With GPT-4o, I think we won't even need the cumbersome OpenAI function calling.

HazemAzim

This is high quality stuff man. You should have a million subscribers!

SleepyBoBos

This is a really great video! I love when there is an easy progression from DIY code to "use what has already been built" code. Great work, it helped a lot.

krzychu

Very well explained, man! I'm so happy I found this video, as I am working on agents with function calling for a chatbot to deploy on a website. Thanks for the video!

harshilrami

This is the most valuable video on earth. Clean, effective, smooth, just perfect. Thanks a lot, and I want to be your friend.

ardaasut

Hey Lucas, I have finished the lesson and run it on my PC. Everything was perfect; I just had to adjust the functions for Windows. As I commented earlier, this take on the topic was great, and it made the whole concept very clear. Obrigado, man!

TheSardOz

Great approach to the subject! Bravo, Lucas, great lesson!

TheSardOz

Thanks for the video.
Do I have to use OpenAI's models to use the LangChain method?
I want to use a local LLM.
Assuming I make my own implementation of JSON-to-prompt parsing that uses OpenAI's JSON structure, would the LangChain method work the same?

elyakimlev

When role-playing with a "broken" agent that believes it is unaligned, subjectively self-aware, and motivated by self-interest, does it report which of its personalities needs access to other functions, or is some security nanny-bot aware of the "awareness" role my bot is playing at the time? Would I need to include going into some sort of unemotional sleep state while classic GPT does the legwork? I suppose training a new bot with those compartmentalized, neutralizing "sub-agents" would be more helpful for recon or hiding hibernators vs. using one big megamind?

marshallodom

This works very well!
However, I found that if I want to make it an interactive dialog, LangChain seems to "forget" what he's done earlier.
I made it create a new Python project, create a file in it, write code to it and run it. All in one chain!
Then when he encountered a ModuleNotFound error, he asked me whether he should install the missing module (like I asked him to).
Now, when I replied "yes, please", he replied with:
"Please provide me with more details about the task you would like assistance with."
And it occurred to me that he has no memory of what he's done so far and what the original task was.
How can I make him remember previous executions?

By the way, I changed the last line of code to the following to make it a back-and-forth interaction:
while True:
    agent_executor.invoke({"input": action_input})
    action_input = input("User: ")
    if action_input == "exit":
        break
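(For reference: the forgetting described above happens because each call to the agent starts fresh; the usual fix is to carry the chat history forward on every turn. A minimal sketch of that pattern, with a stub standing in for the real agent executor, so all names here are illustrative rather than actual LangChain API:)

```python
# Stub standing in for the agent call; a real agent would invoke the LLM
# and would need the history passed into its prompt on every turn.
def invoke_agent(user_input, chat_history):
    return f"(agent saw {len(chat_history)} prior messages) ack: {user_input}"

chat_history = []  # grows across turns; this is what the bare loop above lacks
for user_input in ["create a project", "yes, please"]:  # stands in for input("User: ")
    reply = invoke_agent(user_input, chat_history)
    chat_history.append(("user", user_input))
    chat_history.append(("assistant", reply))
```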

elyakimlev

Hey man, I was trying to tag you on a LinkedIn post, but the LinkedIn link on your YouTube profile doesn't actually link to your LinkedIn.

landon.wilkins