Build an Agent with Long-Term, Personalized Memory

This video explores how to store conversational memory, similar to ChatGPT's new long-term memory feature.

We'll use LangGraph to build a simple memory-managing agent to extract pertinent information from a conversation and store it as long-term memory via parallel tool calling.
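
For a rough picture of that pattern before watching the walkthrough, here is a minimal, hedged LangGraph sketch (the model name, prompt wording, and in-memory list are placeholders, not the code from the video): an LLM bound to a save_memory tool can emit several tool calls in one response, and a prebuilt ToolNode executes each call, persisting one extracted fact at a time.

```python
# A minimal sketch of the pattern, not the video's actual code: the model name,
# prompt wording, and in-memory list are illustrative assumptions.
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

long_term_memory: list[str] = []  # stand-in for a real database or vector store

@tool
def save_memory(fact: str) -> str:
    """Persist one pertinent fact about the user to long-term memory."""
    long_term_memory.append(fact)
    return f"saved: {fact}"

llm = ChatOpenAI(model="gpt-4o-mini").bind_tools([save_memory])

def agent(state: MessagesState) -> dict:
    # The model may emit several save_memory calls in a single response
    # (parallel tool calling), one per extracted fact.
    return {"messages": [llm.invoke(state["messages"])]}

builder = StateGraph(MessagesState)
builder.add_node("agent", agent)
builder.add_node("tools", ToolNode([save_memory]))
builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", tools_condition)  # tool calls -> "tools", else END
builder.add_edge("tools", "agent")
graph = builder.compile()

graph.invoke({"messages": [
    ("system", "Extract facts about the user worth remembering; call save_memory once per fact."),
    ("user", "I'm Sam, I live in Berlin, and I'm allergic to peanuts."),
]})
print(long_term_memory)
```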

Interested in talking about a project? Reach out!

Timestamps:
0:00 - Intro
0:16 - Memory Demo
3:21 - Approaches to Adding Memory
9:59 - Code Walkthrough
19:54 - Optimizations

Follow along with the code on GitHub:
Comments:

Great code walkthrough. I'm working on something similar, so it was cool to see how you approached it. Thanks for sharing.

jacobgoldenart

Thanks for the excellent idea and explanation.

ratral

Excellent content! Thanks for sharing, man :D

joao.morossini

Dude, your video helped me a lot, THANKS!!!

weiwei

We were just talking about this issue. All of these chat UIs treat content as if it's disposable. What works in text messaging between two people doesn't translate as well when working with computers, not if the information has utility value. This is great!

jeremybristol

Nice. Looks similar to AutoGen's teachable agent. Appreciate your work.

mtthias

This is so great: structured data extraction from conversations is something I'm also working on. And by the way, congrats: you have a better like-to-view ratio than MrBeast, at 4%. 🚀

JulianHarris

Hey, thanks for this. Very nice to see an actual application built up like this.
I will most certainly come back for more.
I would be interested in seeing you set up a more corporate-oriented use case. What if a free-text field in a form contains information relevant to a sign-up for a service, or something like that?

madelles

Your Miro drawing skills are next level.

talhaanwar

Can you share the Vite front end? Or how you set up the front end and the backend?

benjaminbascary

Hello, this is so cool! Is it possible to share the full code, including the backend/front-end code as well? Would love to try this, thanks so much!

jackmartin

This is amazing; I'm creating something similar. Now I will use the sentinel approach. BTW, I would like to ask: where can I find the frontend?

khushpatel

Wow, this is super cool. Could something like this be applied to using an LLM to code a web app? One of my problems right now is that I'm using GPT-4 to help code a project, but before I can get through it, the context window fills up and it starts to return incorrect code.

adventurelens

Thank you! Aiming to build a similar one for a different use case. If you could share the code with appropriate licensing, that would be helpful!

selvakumars

I think I missed this in the MemGPT paper, but summarizing and storing attributes in long-term memory and then refetching them for the context window is also likely going to increase the latency of the main response.

akashdeb
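
A hedged sketch of one way to address the latency concern raised above (not necessarily what the video's Optimizations section does; the sleep-based stubs stand in for real LLM calls and database writes): produce the user-facing reply first, then extract and store memories as a background task, so only the retrieval read remains on the critical path.

```python
import asyncio

async def generate_reply(history: list[dict]) -> str:
    await asyncio.sleep(0.5)   # stand-in for the main chat-model call
    return "Sure, noted!"

async def extract_and_store(history: list[dict]) -> None:
    await asyncio.sleep(1.0)   # stand-in for memory extraction + database write

async def handle_turn(history: list[dict], user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = await generate_reply(history)                   # user sees this immediately
    asyncio.create_task(extract_and_store(list(history)))   # memory write runs off the response path
    return reply

async def main() -> None:
    print(await handle_turn([], "I just moved to Berlin."))
    await asyncio.sleep(1.1)   # a real server keeps running; here we let the background write finish

asyncio.run(main())
```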

Great work! Would you mind sharing the code for the front end? ;)

unclecode

Would you be able to share the MemGPT doc?

kaisaiokada

Interesting. Subscribed. By the way, can I get the source code for the UI? It looks pretty neat and clean.

tharunbhaskar

Cool channel & video!! May I ask how long it took to get some views on your videos? Did you get views within 24 hours? Or did you start to get some views after X uploads?

zhrannnnn

How do you store the memories? I probably implemented it wrong: I did a while loop with input, and now I'm trying to come up with a solution for how to actually store the messages in memory.

michaelbuloichyk
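
For the question above about storing memories from a while loop, here is a minimal sketch of the simplest approach, assuming a JSON file as the store and langchain_openai's ChatOpenAI (the video goes further and extracts summarized facts rather than persisting the raw transcript): append every turn to a history, write it to disk, and reload it into the prompt on the next run.

```python
import json
from pathlib import Path

from langchain_openai import ChatOpenAI

MEMORY_FILE = Path("memory.json")   # hypothetical store; swap in a database or vector store
llm = ChatOpenAI(model="gpt-4o-mini")

# Reload whatever was stored on previous runs.
history = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

while True:
    user_input = input("you: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    reply = llm.invoke(history)     # the full stored history is the model's context
    history.append({"role": "assistant", "content": reply.content})
    MEMORY_FILE.write_text(json.dumps(history, indent=2))  # persist after every turn
    print("bot:", reply.content)
```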