Mastering LangGraph: Agentic Workflows, Custom Tools, and Self-Correcting Agents with Ollama!

Dive into the world of LangGraph with our latest tutorial! This video takes you through a comprehensive journey of leveraging LangGraph to create powerful, stateful AI agents. Here's what you'll learn:

Defining Custom Tools: Learn how to use the @custom_tool decorator to streamline your workflow and eliminate the need for LangChain.
Tool Calling and Function Calling: Understand how to perform tool calls and integrate function calls within your agentic workflows.
Self-Correcting Agents: Discover the magic of self-correcting agents that can reason with themselves, ensuring more accurate and reliable outcomes.
Saving and Loading Graphs: Step-by-step guide on how to save your LangGraph to a file and reload it, making your AI development process more efficient.
Utilizing Ollama with LangGraph: Explore the capabilities of the latest Ollama models (llama3.1:70b and llama3.1:8b-instruct-q8_0) for advanced AI applications.
Advanced Features: Get insights into using model cards, generating system prompts, and managing tokens for optimal performance.
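The custom-tool idea from the list above can be sketched in plain Python. Note that the @custom_tool decorator, the registry, and the dispatch helper here are assumptions inferred from the description, not the video's actual code:

```python
import inspect

# Hypothetical registry mirroring the @custom_tool idea: each decorated
# function is recorded with a description and parameter names taken from
# its signature, with no LangChain dependency.
TOOL_REGISTRY = {}

def custom_tool(func):
    """Register a plain function as a callable tool (assumed design)."""
    sig = inspect.signature(func)
    TOOL_REGISTRY[func.__name__] = {
        "description": (func.__doc__ or "").strip(),
        "parameters": [p.name for p in sig.parameters.values()],
        "callable": func,
    }
    return func

@custom_tool
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

def call_tool(name, **kwargs):
    """Dispatch a model-requested tool call by name."""
    return TOOL_REGISTRY[name]["callable"](**kwargs)
```

With this sketch, a model's tool-call request such as `{"name": "add", "args": {"a": 2, "b": 3}}` would be executed via `call_tool("add", a=2, b=3)`.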
Join me in this in-depth tutorial and enhance your AI projects with LangGraph's robust framework. Like, share, and subscribe for more cutting-edge AI tutorials and updates!
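The self-correction loop mentioned above follows a common pattern: generate, validate, and on failure feed the error back so the model can retry. A minimal pure-Python sketch of that pattern (the model stub and validator are illustrative assumptions, not the tutorial's code):

```python
def self_correcting_run(model, validate, prompt, max_retries=3):
    """Generate, validate, and retry with the error appended to the prompt.

    `model` is any callable str -> str; `validate` returns (ok, error_msg).
    """
    current = prompt
    for _ in range(max_retries):
        answer = model(current)
        ok, error = validate(answer)
        if ok:
            return answer
        # Feed the failure back so the model can reason about its mistake.
        current = f"{prompt}\nPrevious answer: {answer}\nError: {error}\nTry again."
    raise RuntimeError("No valid answer within retry budget")

# Illustrative stub: a "model" that only answers correctly after it sees feedback.
def stub_model(prompt):
    return "42" if "Error:" in prompt else "forty-two"

def must_be_digits(answer):
    return (answer.isdigit(), "answer must be numeric")
```

In a real LangGraph setup the validate step would be a node and the retry would be a conditional edge back to the generation node; the loop above just shows the control flow in isolation.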

🔗 Links & Resources:
Github:

Watch full series on LangGraph here:

#LangGraph #AI #AgenticWorkflow #CustomTools #Ollama #LLM #MachineLearning #AIIntegration #TechTutorial #LangChainAlternative

Be sure to watch the entire video to master these new capabilities and take your AI development to the next level!
Comments

GitHub repo link is in the description now.

MukulTripathi

What a perfect explanation of different concepts! Following other tutorials on YouTube is so hard and they leave out critical explanation and pieces for newer developers. This tutorial walks through everything step by step. Absolutely amazing job here. Thank you, please keep it up and I'll be following all your upcoming videos!

ronwiltgen

Thank you a thousand times, sir!
Every single tutorial is packed with so many "hidden" pieces of information that 10 * 5-minute videos can't even come close to your content. I hope you will create many more tutorials like this. I have absorbed all your videos, recreated them, modified them, etc. If I may make a request, it would be for an explanation of all the different "function calls" that exist. Native function calls, "normal" function calls, tool usage, etc. Is this all the same? For example, what is the difference between built-in (into LangChain/LangGraph) tools/functions and those written by myself?

An explanation of frdel/agent-zero or a smaller rebuilt version of it would also be great 😁🙈

reinerzufall

I didn't understand the part about how an LLM generates responses even when it doesn't know the answer. Isn't that exactly what hallucination is? Can you explain why it happens?

MrTapan

Does the code assume you have an AI server (named a certain way) connected to your network?

DaleIsWigging

Hi Mukul, these are very informative. Do you have a github repo for the code?

daljeetsingh

Results are poor with the conditional edge when using a smaller model like llama3.1:8b or phi3.5 — lots of crashing or incorrect boolean returns.

rayzorr