Reflection Agents

In this video, we will show you how to build three reflection-style agents using LangGraph, an open-source framework for building stateful, multi-actor AI applications.

Reflection is a prompting strategy used to improve the quality and success rate of agents and similar AI systems. It involves prompting an LLM to reflect on and critique its past actions, sometimes incorporating additional external information such as tool observations.
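
To make the pattern concrete, here is a minimal generate/reflect loop (a sketch only: it assumes LangGraph's MessageGraph API and stubs out the LLM calls, so the node bodies and the cutoff are illustrative, not the notebook's code):

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph import MessageGraph, END

def generate(messages):
    # in the real notebook this calls an LLM to draft or revise an answer
    return AIMessage(content="draft answer")

def reflect(messages):
    # LLM-as-critic: the critique is appended and fed back to the generator
    return HumanMessage(content="critique: add concrete examples")

builder = MessageGraph()
builder.add_node("generate", generate)
builder.add_node("reflect", reflect)
builder.add_edge("reflect", "generate")

def should_continue(messages):
    # stop after a few generate/reflect rounds to bound token cost
    return END if len(messages) >= 6 else "reflect"

builder.add_conditional_edges("generate", should_continue)
builder.set_entry_point("generate")
graph = builder.compile()

Running graph.invoke([HumanMessage(content="...")]) then alternates drafts and critiques until the cutoff is reached.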

⏰ *Timestamps*
-----------
00:00 What is reflection?
00:48 Basic Reflection Example
04:59 Reflexion
10:25 Language Agent Tree Search (LATS)
12:26 Choosing candidate node in LATS
15:42 Candidate Generation Node in LATS
17:15 Example run of LATS
17:34 Reviewing the run in LangSmith
19:34 Conclusion

🔗 *Links*
-----------

🤔 *Simple Reflection*

🧠 *Reflexion*

🌲 *Language Agent Tree Search*

Developing AI applications is easier with LangSmith. Create a free account at

#ai #artificialintelligence #langchain #nlp #langgraph #agents #search
Comments

Since LATS is essentially a tree-based ML model, we need to go one step further and implement hyperparameter search. UCB has a c hyperparameter, and we could also add parameters that tell the model how different we want the five new nodes to be (and how many new nodes we want). These can be tuned with standard ML techniques if we have good training and validation datasets (question-response pairs).
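
For context, a generic UCB1 scoring function with the exploration constant c exposed looks like this (names and defaults are illustrative, not taken from the video's notebook):

import math

def ucb_score(total_value, visits, parent_visits, c=1.4):
    # unvisited children are always explored first
    if visits == 0:
        return float("inf")
    exploitation = total_value / visits  # average reward of this child
    exploration = c * math.sqrt(math.log(parent_visits) / visits)
    return exploitation + exploration  # larger c favors exploration

Tuning c (along with the number and diversity of candidates generated per expansion) against a held-out set of question-response pairs is exactly the kind of standard hyperparameter search described above.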

slavrine

LATS is awesome! It would be great to see an episode on this in the LangSmith series to show how it can integrate with evaluations.

andydataguy

Magnificent work! Keep it up, you rock, guys.

JalelTounsi

My mind... 😵😵‍💫🤯. The last one took me around two hours to understand, but it is very useful. For now, understanding the logic is what matters; implementing your own version can also be useful, depending on the requirements.

insitegd

Fascinating discussion! However, I’m curious about whether the results of the third concept truly justify its extra cost. 🤔

MrAnalyzer

Interesting idea, though it doesn't look like the design is quite there yet. It would be interesting to see whether we need to go one layer deeper with human evaluation to tell the LLMs how to critique something, or to provide some sort of positive or negative feedback. That way the LLM would have a better idea of whether something is good or bad.

Orcrambo

Hello, I have a question: what's the difference between this and telling a single agent to reflect on its answer?

paulturyahabwa

Aside: what is the screen recording tool you all use?

landon.wilkins

Very cool. However, I have a question about GPT-4 using this reflection process: won't this effectively double the cost of the tokens? Is there a way to implement it without incurring huge fees? I am aware some increase is unavoidable, but something more manageable would be hugely beneficial.

georgemarinov

You started with Mistral in the "easy" first example and then casually moved to OpenAI in the next example. Why?

fernsmark

Great concept and explanation. What I do not understand in the Reflexion example is that you are using an example question here:


example_question = "Why is reflection useful in AI?"
initial =


and then seamlessly continue with the notebook to use another question in the graph phase at the end:


events = graph.stream(
    [HumanMessage(content="How should we handle the climate crisis?")]
)


I am trying to adapt this into a single-query Streamlit interface and am not sure how to remove the example question while preserving the logic. I would appreciate any help or guidance!

RADKIT

You lost me with "cLiMaTe CrIsIs!"

TreeLuvBurdpu