Agents: LangChain ReAct vs. OpenAI Assistants API

GPT-4 Summary:
"Discover the Future of AI: LangChain & OpenAI's Latest Tools for Building Complex Applications" - Join our must-watch event for AI enthusiasts and professionals! We'll dive into the innovative world of Agentic apps, exploring LangChain's breakthroughs in data-aware applications and Chain-of-Thought reasoning. Get an exclusive look at OpenAI's Assistants API, revolutionizing the development of agent-like applications. This event is a game-changer for LLM Operations practitioners, aspiring AI engineers, and builders eager to harness the power of LangChain and OpenAI's Assistants API. Don't miss out on learning how to create cutting-edge AI systems, with all demo code provided for you to build and share. Click now to unlock the secrets of advanced AI applications!

Have a question for a speaker? Drop it here:

Speakers:
Dr. Greg Loughnane, Founder & CEO, AI Makerspace.

Chris Alexiuk, CTO, AI Makerspace.

Apply for the LLM Ops Cohort on Maven today!

Join our community to start building, shipping, and sharing with us today!

How'd we do? Share your feedback and suggestions for future events.
Comments

Wonderful content! As a beginner I'm confused by all these similar terms: assistants, agents, multi-agents… Finally, a video that clearly compares the OpenAI Assistants API and the LangChain ReAct agent. Correct me if I'm wrong, but so far the Assistants API seems pretty inferior, no? Besides not being able to stream and requiring us to keep checking the "run", the lack of reasoning and observation makes it quite underwhelming, no? To be honest, it's just an "agent" that can choose a tool one time. If the output of the tool it chose wasn't useful, it can't do anything about it. ReAct, on the other hand, is smart enough to adjust the parameters it uses to call the functions, choose another function, and in general run multiple rounds of function calling based on its observations. Sounds much smarter to me. What do you think?

adrienkwong
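[Editor's note] The "keep checking the run" pattern this comment refers to can be sketched as a small polling loop. This is a minimal sketch, not the real OpenAI SDK: `retrieve_status` is a hypothetical callable standing in for a run-retrieval call such as the Assistants API's run endpoint, and the set of terminal statuses is an assumption based on the API's documented run lifecycle.

```python
import time

def poll_run(retrieve_status, interval=0.0, max_polls=50):
    """Poll until the run reaches a terminal status, then return that status.

    `retrieve_status` is a hypothetical zero-argument callable that returns
    the run's current status string each time it is invoked.
    """
    terminal = {"completed", "failed", "cancelled", "expired"}
    for _ in range(max_polls):
        status = retrieve_status()
        if status in terminal:
            return status
        time.sleep(interval)  # back off between checks
    raise TimeoutError("run did not finish within max_polls")

# Usage with a fake retriever that finishes on the third check:
statuses = iter(["queued", "in_progress", "completed"])
print(poll_run(lambda: next(statuses)))  # -> completed
```

Streaming would remove the need for this loop entirely, which is part of the comparison the comment is making.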

Hey guys. Absolutely loved that video! I have two questions and I would love if you could help me with an answer.

1. Is there a way to restrict your LLM's data source to, for example, a history.pdf file? So if a question falls outside of that document, even something like "what is 1+1?", it would respond that it cannot answer because that info is not in the specified PDF file.

2. You touched on fine-tuning. OpenAI now has fine-tuning capability, but I didn't understand where that would fit into the framework you laid out. Do we train our LLMs before we start adding tools, function calls, etc.?

Thank you so much. Any help is appreciated greatly :)

MichaelBrown-ockx

This was awesome, guys. A good follow-up would be a comparison with AutoGen and even LlamaIndex.

bertobertoberto

Guys, your content is definitely outstanding! Absolutely love it!

peterc.