LangGraph Crash Course with code examples


Interested in building LLM Agents? Fill out the form below

Github:

Time Stamps:

00:00 Intro
00:19 What is LangGraph?
00:26 LangGraph Blog
01:38 StateGraph
02:16 Nodes
02:42 Edges
03:48 Compiling the Graph
05:23 Code Time
05:34 Agent with new create_open_ai
21:37 Chat Executor
27:00 Agent Supervisor
Comments

If you are interested in building LLM Agents, fill out the form below and let me know what type of agents you would like to see examples of.

samwitteveenai

I really like the idea of integrating graph theory into this. You can experiment with different agents and tools for certain types of tasks. Then you can start playing around with network measures and give edges weight based on the successful completion of types of tasks. The network essentially ends up balancing itself out as you start to direct traffic along your high-weight edges. You can run another network and experiment with different models for different tasks. It's like a simulation of a workplace where people end up going to the most productive people to accomplish tasks.
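The weighting idea above can be sketched in a few lines of plain Python (this is a toy illustration, not LangGraph; all names are hypothetical): edges carry weights, successful completions reinforce them, and traffic drifts toward the more productive agent.

```python
import random

# Toy illustration (not LangGraph): edges from a router to agents carry
# weights that grow when the chosen agent completes a task successfully.
class WeightedRouter:
    def __init__(self, agents):
        # every edge starts with the same weight
        self.weights = {name: 1.0 for name in agents}

    def pick(self):
        # sample an agent proportionally to its edge weight
        names = list(self.weights)
        return random.choices(names, weights=[self.weights[n] for n in names])[0]

    def feedback(self, name, success):
        # reinforce edges that led to successful completions
        self.weights[name] *= 1.25 if success else 0.8

# Simulate: "fast_agent" succeeds 90% of the time, "slow_agent" 30%.
rates = {"fast_agent": 0.9, "slow_agent": 0.3}
router = WeightedRouter(rates)
random.seed(0)
for _ in range(200):
    agent = router.pick()
    router.feedback(agent, random.random() < rates[agent])

# Traffic should have concentrated on the more productive agent.
dominant = router.weights["fast_agent"] > router.weights["slow_agent"]
print(dominant)
```

With multiplicative updates the expected log-weight of the 90%-success agent grows while the 30%-success agent's shrinks, so the router self-balances exactly as described.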

viktor

This is ABSOLUTELY FANTASTIC! I've been dealing with a manually written "orchestrator" that felt so clumsy before... This is a game changer! You really deliver on your content, man!

avidlearner

Thanks, this is great stuff. I've been teaching myself to build agents in langchain for some months and it is slow going. I think I need to step back and re-architect to use LangGraph instead. Looking forward to seeing more of your material on this stuff!

touchthesun

Thank you for going through the notebooks line by line. Helps noobs like me follow along.

tvaddict

Awesome work once again!
Very interested in LangGraph for more complex use cases! For us, building a team-augmentation platform with many agents (some full agents, some just chains), it could let us have one big, powerful super-agent with a supervisor, as in your third part. To be continued!

joffreylemery

Great video as usual. Yes, more videos and use cases on building agents with the updated version of LangChain would be great.

paulmiller

Sam, I watch all your videos from Colombia. They are awesome!! You explain really well.

luisguillermopardo

This video is timely, as I was just about to start exploring LangGraph to get a feel for which use cases it can fit. A deeper-dive video would be much appreciated.

kenchang

Sir, please make a full course using LangChain with OpenAI, Hugging Face, Llama, and fine-tuned models and chatbots. Keep it a little bit affordable, like $100; that would be really great. Lots of love from India.

shobhitagnihotri

This is such a great intro, thank you so much for the effort.

rupjitchakraborty

Very insightful, but heavy stuff to master. Thank you, Sam. ❤

guanjwcn

Super useful. I would say this was explained better than on the official LangChain channel.

Next video: it would be cool to build Perplexity's Copilot feature, i.e., ask clarifying questions if needed with a human-in-the-loop step, then give the agent access to the internet to get the results.

VibudhSingh

Your video is great. However, it presumes prior knowledge of the LangGraph ecosystem. For example, at @11:23, the Trace page and the setup behind it are not explained. Your Colab 01 as well: try running it incognito as a viewer; once you reach the 'prompt' cell, things start breaking apart. Overall, you are engaging and knowledgeable, but the video could use a warning at the beginning to inform the viewer of the requirements, like knowing the LangGraph ecosystem. I'm subscribing nonetheless; I hope you see this as a constructive comment, Sam.

lesptitsoiseaux

Thanks again for the awesome content; I've learned a lot from your videos, please keep doing what you do. I was also wondering whether you plan on making videos about production-ready RAG with the methods you talked about in your RAG series. Thanks a lot, and please keep enriching us with your content.

ahmedennaifer

You mentioned something right on point to what I was wondering, Sam. In your experience, which of the open-source LLMs support function calling as of today? Which one would you try out first? And if you do, please make a video about LangGraph with a Hugging Face LLM and function calling! ☺ Love your work, btw!

RADKIT

📝 Summary of Key Points:

📌 LangGraph is a graph-based system for building custom agents in the LangChain ecosystem. Nodes represent different components of an agent, and edges connect these nodes to enable decision-making and conditional routing within the agent.
🧐 The video provides coding examples to demonstrate LangGraph's functionality. Examples include building an agent executor using custom tools, using a chat model and a list of messages for more complex conversations, and creating an agent supervisor to route user requests to different agents based on predefined conditions.

💡 Additional Insights and Observations:

💬 "LangGraph is a powerful tool for building custom agents with decision-making capabilities."
📊 No specific data or statistics were mentioned in the video.
🌐 The LangChain ecosystem and LangGraph provide a flexible framework for creating various types of agents.

📣 Concluding Remarks:

LangGraph is an innovative tool within the LangChain ecosystem that allows users to build custom agents with decision-making capabilities. The video showcases coding examples to demonstrate its functionality and encourages viewers to explore different use cases. LangGraph provides a flexible and powerful framework for creating agents, making it a valuable tool for developers.
Generated using TalkBud
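To make the nodes/edges/conditional-routing summary concrete, here is a minimal toy graph runner in plain Python. This deliberately does not use the LangGraph API; it only mimics its shape under assumed, hypothetical names: nodes are functions over a shared state dict, and a conditional edge picks the next node.

```python
# Toy graph executor illustrating the node/edge/conditional-routing idea.
# Plain Python, not LangGraph; structure loosely mirrors a StateGraph.
END = "__end__"

def agent(state):
    # pretend the "agent" decides whether it still needs a tool
    state["needs_tool"] = "result" not in state
    return state

def tool(state):
    state["result"] = 42  # stand-in for a real tool call
    state["needs_tool"] = False
    return state

def route(state):
    # conditional edge: go to the tool node or finish
    return "tool" if state["needs_tool"] else END

nodes = {"agent": agent, "tool": tool}
edges = {"agent": route, "tool": lambda s: "agent"}  # tool loops back to agent

def run(state, entry="agent"):
    current = entry
    while current != END:
        state = nodes[current](state)
        current = edges[current](state)
    return state

out = run({"input": "give me a number"})
print(out["result"])  # → 42
```

The agent → tool → agent → END trace is exactly the loop an agent executor runs; LangGraph's `StateGraph` adds typed state, compilation, and streaming on top of this basic shape.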

abdelkaioumbouaicha

Thanks Sam. For Colab 01, I tried inputs = {"input": "Give me a random number and then write in words", "chat_history": []} — it is still calling the to_lower_case tool. Is that expected, or do we have to be more explicit in our input?

ankitjain

Great! Do you have an example notebook showing how to use LangGraph for code generation in an externally compiled language, such as C? For example, how do you replace the exec call (which runs Python code in-process) with something that invokes the C compiler, runs it against the generated (and saved) code file, collects the compiler errors, and feeds them back into the LangGraph flow at the relevant node, and so on?
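One way such a node could be sketched (assumptions: gcc is on PATH; all function names here are hypothetical, not from any LangGraph notebook): write the generated source to a file, invoke the compiler via subprocess, and put the parsed diagnostics back into the graph state. The error-parsing helper works on plain gcc-style `file:line:col: error: msg` output.

```python
import re
import subprocess
import tempfile
from pathlib import Path

def compile_c(source: str):
    """Write C source to a temp file, run gcc, return (ok, stderr)."""
    with tempfile.TemporaryDirectory() as tmp:
        src = Path(tmp) / "gen.c"
        src.write_text(source)
        proc = subprocess.run(
            ["gcc", str(src), "-o", str(Path(tmp) / "gen.out")],
            capture_output=True, text=True,
        )
        return proc.returncode == 0, proc.stderr

def parse_gcc_errors(stderr: str):
    """Pull (line, message) pairs out of gcc-style 'file:line:col: error: msg' text."""
    pat = re.compile(r":(\d+):\d+: error: (.+)")
    return [(int(m.group(1)), m.group(2)) for m in pat.finditer(stderr)]

# A compile node for the graph would then stash the diagnostics in state,
# so a downstream "fix" node can prompt the model with them:
def compile_node(state):
    ok, stderr = compile_c(state["code"])
    state["compile_ok"] = ok
    state["errors"] = parse_gcc_errors(stderr)
    return state

sample = "gen.c:3:5: error: expected ';' before 'return'"
print(parse_gcc_errors(sample))  # → [(3, "expected ';' before 'return'")]
```

A conditional edge on `state["compile_ok"]` would then either finish or loop back to the generation node with the errors in context, mirroring the retry loop described above.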

PrashantSaikia

Pretty much every LLM API has a large set of parameters: temperature, max output length, top P, [top K], frequency penalty, presence penalty.

Shrink-wrapped UIs like ChatGPT don't give access to these. The defaults differ in some APIs: sometimes temperature is set to 1, sometimes 0.8.

Some experiments I've done indicate that changing these parameters has a serious impact on the results. But I've hardly ever seen benchmarks, papers, or videos that discuss this. As far as I can tell, most LLM benchmarks only test the "default" settings.

I'd love to see some more in-depth experiments that compare models and change these parameters.

The community has been trying a lot of elaborate optimizations to get the most desired results out of LLMs. But my partial experiments suggest that there's a fair bit of untapped potential with the model parameters.
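A minimal sweep harness for that kind of experiment might look like the sketch below. Both `call_llm` and `score` are placeholders (no real API is used here); substitute your actual client and a real metric such as exact match or an LLM judge.

```python
import itertools

def call_llm(prompt, temperature, top_p):
    # Placeholder for a real API call; returns a dummy "answer" here.
    return f"answer(t={temperature}, p={top_p})"

def score(answer, reference):
    # Placeholder metric; swap in exact match, BLEU, an LLM judge, etc.
    return float(len(answer)) / (len(reference) + 1)

def sweep(prompt, reference, temperatures, top_ps):
    # Cartesian product over the parameter grid, scored per setting.
    results = {}
    for t, p in itertools.product(temperatures, top_ps):
        results[(t, p)] = score(call_llm(prompt, t, p), reference)
    return results

grid = sweep("What is 2+2?", "4", temperatures=[0.0, 0.8, 1.0], top_ps=[0.9, 1.0])
print(len(grid))  # → 6
```

For a sampled setting you would also repeat each cell several times and aggregate, since a single generation at temperature 0.8 tells you little about the distribution.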

AdamTwardoch