Chat With Documents Using ChainLit, LangChain, Ollama & Mistral 🧠

In this video, I am demonstrating how you can create a simple Retrieval Augmented Generation UI locally on your computer. You can follow along with me by cloning the repo. You can also use LangSmith for tracing the LLM calls, and use LangChain Hub to pull ready-made prompt templates for different models, in this case, Mistral.
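The "retrieval" half of Retrieval Augmented Generation comes down to embedding the question and handing the most similar document chunks to the model. As a rough, dependency-free illustration of that lookup: the toy bag-of-words embedding below is only a stand-in for the real Ollama/Mistral embeddings and vector store used in the video, and all function names are mine, not from the repo.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a word-count vector.
    A stand-in for real dense embeddings, purely illustrative."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query,
    which is what the retriever does inside a RetrievalQA chain."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

docs = [
    "Ollama runs the Mistral model locally.",
    "Chainlit renders the chat UI in the browser.",
    "LangChain wires the retriever and the LLM together.",
]
print(retrieve("which tool runs mistral locally?", docs, k=1))
```

In the actual app, the retrieved chunks are stuffed into the prompt template before the question is sent to Mistral.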
Open Source in Action 🚀
- Mistral is used as the Large Language Model.
- LangChain is used as the LLM framework.
- The Mistral model is downloaded locally using Ollama.
- Chainlit is used for deploying the UI.
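Before any of this works, the ingestion step has to split each document into overlapping chunks for embedding. In the repo this is handled by LangChain's text splitters; the following is only a dependency-free sketch of the underlying idea, with illustrative size and overlap values.

```python
def chunk_text(text, size=1000, overlap=200):
    """Split text into fixed-size chunks that overlap, so a sentence
    cut at one chunk boundary still appears whole in its neighbour."""
    assert 0 <= overlap < size, "overlap must be smaller than the chunk size"
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars shared
    return chunks
```

LangChain's `RecursiveCharacterTextSplitter` does the same job more carefully, preferring to break on paragraph and sentence boundaries instead of at a fixed character count.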

👉🏼 Links:

------------------------------------------------------------------------------------------

------------------------------------------------------------------------------------------
🔗 🎥 Other videos you might find helpful:

------------------------------------------------------------------------------------------
🤝 Connect with me:

#langchain #chainlit #ollama #mistral #rag #retrievalaugmentedgeneration #chatgpt #datasciencebasics
Comments

Thank you so much! Watched a lot of videos online and yours was the only one that helped me get it up and running :)

mohamedemarah

Love it, thanks! I've been fighting a bunch of programs to get open models to work and this saved my life!

bp

Thank you so much. Seeing the possibilities of LangChain and Ollama is so great. Is it possible to connect this with a database, say BigQuery, so that you could chat with the database or table and get responses? It would be great to see a video on that. Thank you so much for your great work.

chibuzoemelike

Thank you, subbed to your channel. Can you also please make a tutorial on other Chainlit topics like Chat Generation, Elements (Image, Pyplot, TaskList, etc.) and the AskUser section? I'd love to see more of your videos in the "ChainLit, LangChain, Ollama & Mistral" series; they are easy to follow and very helpful. Loading a previous complete chat interaction in Chainlit would also make an interesting video.
I'm also trying to connect my React frontend app to Chainlit using this combination. I'm stuck at changing chat profiles from the React frontend to be able to use Mistral and GPT4All when selected from a tab; if you can make a video on this, it would also be very helpful. Thanks again for the video... :)

mrweeed

How can you enable streaming and render the sources appropriately? I put an issue on your GitHub with my attempt at this.

atrocitus

I am not able to run it; when I upload a document, it just keeps processing forever.
Please help me.

genix

Thank you for your videos. They are very informative. Could you please also create a video that explains how to use "BioMistral" with Ollama?

IamalwaysOK

I have been asked to build a PoC on document search.
I have used Streamlit to build a simple UI application using the Google AI API and Gemini Pro.

Now I have to implement the functionality below:

"Should have a design & approach for the solution.
Two types of users will use this document search bot:
Admin & Normal User.
Admin should be able to upload and delete documents.
Both users should be able to ask queries.
The response should match the query asked.
Have test cases created for validation."

Please help me out with the source code: how to start and build it.

spacespectale

🎯 Key Takeaways for quick navigation:

00:00 🛠️ *Fourth video in an Ollama series demonstrating creation of a simple Retrieval Augmented Generation UI using various models.*
01:07 🧩 *Utilizing Ollama, LangChain, Chainlit, and Mistral for deploying applications and tracing.*
03:25 📦 *Setting up a virtual environment and installing the necessary packages for the project.*
06:43 📜 *Installing Ollama models locally for use in embedding and generating responses.*
09:54 🤖 *Implementing a chat interface for interacting with documents, leveraging retrieval QA chains.*
10:48 🖥️ *Running and interacting with the created application to ask questions related to uploaded PDF files.*
14:04 🔄 *Exploring additional functionality for uploading PDF files directly onto the UI for interaction.*
19:36 🎥 *Encouragement to like, share, and subscribe for more content, with a focus on practical application and experimentation.*

Made with HARPA AI

MarcvitZubieta

You saved my life, can you provide 1-1 sessions?

dprasen

What about this error:
"Collection langchain is not created."

achrafrahime

What customization are you using in your terminal?

AI_GB

Thank you. How can I run this in offline mode, locally? I mean, without internet.

kaleshashaik

Hi, is this solution private? That is, do all the data, vectors, LLM, etc. run locally?

Alkotas

When running ingest.py I am getting this error: ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'mistral' not found, try pulling it first"}. Help please!
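That 404 means the Ollama server is running but the model has not been downloaded yet; running `ollama pull mistral` first fixes it. As a rough sketch, you can check which models the local server already has via its /api/tags endpoint before ingesting (the helper name and host default are mine, not from the repo):

```python
import json
import urllib.error
import urllib.request

def model_available(name, host="http://localhost:11434"):
    """Ask the local Ollama server which models are pulled.

    Returns True/False, or None when the server is unreachable
    or returns something unexpected."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=3) as resp:
            tags = json.load(resp)
    except (urllib.error.URLError, OSError, ValueError):
        return None  # server down, wrong host/port, or bad response
    return any(m.get("name", "").startswith(name) for m in tags.get("models", []))

# e.g. run this before ingest.py; if it prints False, `ollama pull mistral` first
print(model_available("mistral"))
```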

fighter

Where do I get the LangChain API key?

shashankpandey

chain.acall(message.content, callbacks=[cb]): chain.acall is now deprecated, and ainvoke does not work with Chainlit because it doesn't show that it's processing the RetrievalQA chain or display the steps.

sunilanthony

The file keeps processing forever. How do I solve this? Please help.

NehaH-uy