LangChain Tutorial (Python) #5: Adding Chat History to Chatbot

#openai #langchain

We can supercharge a simple retrieval chain by including the conversation history in both the chain and the vector retrieval.
This allows users to ask follow-up questions, and the model will be able to recall information from the chat history.
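The pattern described above can be sketched in plain Python. The chain/LLM call is stubbed out here; `fake_llm` and the tuple-based history list are illustrative stand-ins, not the LangChain API:

```python
# Minimal sketch of a chat loop that threads history into each call.
# `fake_llm` stands in for the real chain invocation.

def fake_llm(question, history):
    # A real chain would send `history` plus `question` to the model.
    return f"Answer to {question!r} (seen {len(history)} prior messages)"

def chat_turn(question, history):
    answer = fake_llm(question, history)
    # Record both sides of the exchange so the next turn can recall it.
    history.append(("human", question))
    history.append(("ai", answer))
    return answer

history = []
chat_turn("What is LangChain?", history)
chat_turn("Does it support chat history?", history)
```

Each turn appends a human/AI message pair, so the follow-up call sees the full prior conversation.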

☕ Buy me a coffee:

📑 Useful Links:

💬 Chat with Like-Minded Individuals on Discord:

🧠 I can build your chatbots for you!

🕒 TIMESTAMPS:
00:00 - Intro
00:37 - Project setup
01:29 - Grab user input from terminal
03:45 - Add a Chat Loop
04:43 - Add Chat History
05:17 - HumanMessage and AIMessage Schemas
07:07 - MessagesPlaceholder
08:00 - Dynamically build chat history
08:52 - Add History to Retrieval
09:15 - Intro to LangSmith
09:35 - Intro to History Aware Retrievers
10:23 - Add History Aware Retriever
11:36 - Retriever Prompt
13:04 - Display generated query in LangSmith
14:12 - Congrats!
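For orientation, the MessagesPlaceholder step listed above works roughly like this: at format time, the placeholder slot in the prompt is replaced by the accumulated history messages. A toy sketch of that substitution (illustrative names, not the actual LangChain API):

```python
# Sketch of what a history placeholder does: when the prompt is formatted,
# the "{chat_history}" slot is spliced out and replaced by the stored
# history messages, and the new user input is appended at the end.

def format_prompt(template, history, user_input):
    messages = []
    for part in template:
        if part == "{chat_history}":
            messages.extend(history)  # splice the full history in place
        else:
            messages.append(part)
    messages.append(("human", user_input))
    return messages

template = [("system", "Answer using the context."), "{chat_history}"]
history = [("human", "Hi"), ("ai", "Hello!")]
prompt = format_prompt(template, history, "What did I just say?")
```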
Comments

I hope you guys enjoy this video!
Let me know down in the comments which document loaders you used in your final chatbot.

Also, would you like to see a dedicated video on Langsmith?

leonvanzyl

This is the single best series on LangChain on YouTube.

ibrahimmansoor

The best short series on LangChain; each video covers one topic, step by step, from the very basics to advanced. 🏆🏅🎖🎉🥇😊 Well done, bro.

muhammadmursalin

This is top class! The way you go through everything is just seamless and makes so much sense. Kudos!

optimyse

Well done. The best simple RAG walkthrough I have seen!

paulmiller

Congratulations, Leon! This series is an awesome introduction! Looking forward to seeing all your videos!

infobarbosa

Thanks, Zyl, for this great series of videos. It really helped me understand the concepts of LangChain and LLMs.

AdityaMishra-mc

You're my #1 go-to for tutorials because you start simple to explain the concepts and then build up the complexity. Hope to bring you into a project. I had abandoned LangChain and RAG in favor of what I could get directly with an API assistant: uploading a file to the thread and letting it manage the CSV conversion and chat history. I was beginning to think of the LangChain framework as more relevant pre Nov 9th. One thing I have zero clarity on is WHEN I ought to use LangChain and the RAG you outlined versus when I should use the Assistants API (beyond the ability to easily change models). I can't find that info anywhere. That breakdown would be a great tutorial.

brianmorin

The fact that you managed to cure my headache is just... amazing! This walkthrough is what $1000 courses teach you. You are amazing; you've earned yourself a subscriber for a long time. Please do more Python courses, and please do more LangChain examples. It's going to be huge in the next year, and it will help you grow as a YouTube channel for sure!

viktormetasoft

Really awesome video 🎉. Very helpful.
Now I can finally finish up my company's assignment 😅

gouravojha

Another great video! Have you figured out a way to get something like this integrated with WhatsApp, or even using Flowise with WhatsApp?

altonjones

Huge thanks, excellent video and explanation!

israeabdelbar

Thank you so much for this tutorial. Could you explain why we are using another API call (LLM) to feed the message history to the "history aware retriever"? I'm just not getting that part; it seemed like you were getting the perfect result before using it. Isn't it enough to feed the message history to the prompt the next time you give the LLM a question? That way we use one API call per question, whereas this way we use an extra call just to give the chat history to the retriever. Am I missing something? Why does the retriever need the chat history when it can be given in the next prompt?

Thank you again, it was very simple to follow along and understand, I will make sure to share this tutorial with my colleagues!

ejs
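For context on this question, the point of the history-aware retriever can be shown with a toy keyword retriever: the vector store only ever sees the query string, so a follow-up like "How much does it cost?" retrieves nothing until an LLM first rewrites it into a standalone question, and that rewrite is the extra API call. Everything below is a toy stand-in, not the LangChain API:

```python
# Why the retriever needs history: vector search only sees the query
# string, so pronouns and ellipsis in follow-ups match nothing.

DOCS = ["LangSmith pricing starts with a free tier.",
        "LangSmith traces every chain run for debugging."]

def retrieve(query):
    # Toy keyword retriever: return docs sharing a word with the query.
    words = set(query.lower().split())
    return [d for d in DOCS if words & set(d.lower().split())]

follow_up = "How much does it cost?"
# What the history-aware rewrite step would produce from the chat history:
standalone = "What does LangSmith pricing cost?"

raw_hits = retrieve(follow_up)          # no overlap with any document
rewritten_hits = retrieve(standalone)   # matches the pricing document
```

Feeding the history only to the answering prompt does not help here, because the retrieval step runs before the answering LLM ever sees the history.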

Thanks for another great video. I am just confused: I was thinking of using Semantic Kernel for this kind of chunking, embedding, and other stuff, but with your solution it would be much simpler and easier as well. Correct me if I am thinking wrong. One more use case: I wanted to create chatbots for my company's legal team. They have large PDF documents and want to get almost exact answers from the documents every time, which I think is not possible. I explained to them that the answers could be semantically the same, but the wording cannot be 100% identical. Please enlighten me a bit more on this.

atifsaeedkhan

Thank you for the video!

Could someone explain the "context" variable that is defined in the system prompt? I didn't understand where we set its value.

rafaeldurbanoneto
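For anyone with the same question, a sketch of where `context` comes from: the retrieval chain fetches documents for the query and "stuffs" them into the `{context}` slot of the system prompt before the LLM ever sees it, so you never set it by hand. The function names below are illustrative, not the LangChain API:

```python
# Sketch of how the {context} variable gets filled: retrieved documents
# are joined ("stuffed") into the system prompt template automatically.

SYSTEM_TEMPLATE = "Answer based on this context:\n{context}"

def stuff_documents(docs):
    # Concatenate the retrieved document texts into one string.
    return "\n\n".join(docs)

def build_system_prompt(retrieved_docs):
    return SYSTEM_TEMPLATE.format(context=stuff_documents(retrieved_docs))

docs = ["LangChain chains LLM calls together.",
        "Retrievers fetch documents."]
prompt = build_system_prompt(docs)
```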

This is a really well-explained tutorial, thanks a lot! I am just wondering why the system answers questions which are totally out of context. E.g. I ask about the planets in our solar system and it answers properly. How can I know that the information provided is explicitly taken from the document retriever and not from the LLM's base knowledge? Thanks!

michaerdmann

I think this might be a dumb question, but are LangChain's methods not applicable to other LLMs? So far I've only seen GPT models being used with it.
PS: amazing content you're putting out! Thanks a ton for this.

ruffy

Thanks for this great video, really amazing 😍🎉

bangarrajumuppidu

Hi Leon, hope you are well and that Jhb is not as hot as it is here in Cape Town at the moment. Quick question: I followed your tutorials closely; the only difference is I used my own URL (my employer's website, the about page). Before I added history, I was getting brilliant results, but after adding chat history, the quality of the results deteriorated significantly. Do you have any suggestions? Great series, by the way; I am also going to follow your updated Flowise content closely.

WayneBruton

What if we use a PDF document for our RAG and we wish to return the source-document info along with the response? How can we do that in this chain?

NavjotMakkar
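One common answer to this last question: have the chain return the retrieved documents alongside the generated answer, so the caller can display citations such as page numbers. A toy sketch with stand-in functions, not the LangChain API:

```python
# Sketch of returning sources with the answer: instead of discarding the
# retrieved documents after generation, carry them in the result dict.

def retrieve(query):
    # Stand-in for a PDF retriever; real docs would carry metadata
    # like source file and page number.
    return [{"page": 3, "text": "RAG combines retrieval with generation."}]

def fake_llm(query, docs):
    # Stand-in for the answering LLM call.
    return f"Based on {len(docs)} source(s): RAG augments the LLM."

def invoke(query):
    docs = retrieve(query)
    answer = fake_llm(query, docs)
    # Return the docs alongside the answer so callers can cite them.
    return {"answer": answer, "context": docs}

result = invoke("What is RAG?")
```

The caller can then print `result["answer"]` followed by the page numbers found in `result["context"]`.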