Chat With Websites Using ChainLit / Streamlit, LangChain, Ollama & Mistral 🧠

In this video, I demonstrate how you can create a simple Retrieval-Augmented Generation (RAG) UI locally on your computer. You can follow along with me by cloning the repo locally. You can also use LangSmith to trace the LLM calls, and LangChain Hub to pull ready-made prompt templates for different models; in this case, mistral. I also show how easily you can switch between models locally.
Open Source in Action 🚀
- Mistral is used as the large language model.
- LangChain is used as the LLM framework.
- The Mistral model is downloaded locally using Ollama.
- Chainlit and Streamlit are used to deploy two different chat UIs.
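The video builds the pipeline with LangChain's loaders, an embedding store, and an Ollama-served Mistral model. As a dependency-free illustration of the retrieval half only, here is a minimal sketch of chunking a page's text and ranking chunks against a question with bag-of-words cosine similarity; the function names and chunk sizes are my own assumptions, not taken from the repo:

```python
import math
import re
from collections import Counter

def chunk_text(text, chunk_size=50, overlap=10):
    """Split text into overlapping word-window chunks (sizes are illustrative)."""
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=1):
    """Return the k chunks most similar to the question."""
    q = Counter(re.findall(r"\w+", question.lower()))
    scored = [(cosine(q, Counter(re.findall(r"\w+", c.lower()))), c)
              for c in chunks]
    return [c for _, c in sorted(scored, key=lambda s: s[0], reverse=True)[:k]]
```

A real deployment would swap the word-count similarity for an embedding model plus a vector store (which is what the LangChain abstractions in the video provide) and then pass the retrieved chunks to Mistral as context for the answer.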

#langchain #chainlit #ollama #mistral #rag #retrievalaugmentedgeneration #chatgpt #datasciencebasics
Comments

Can’t wait to try this. It’s perhaps the best intro I’ve seen, especially for Python noobs like me.

The LangChain and LangGraph examples are great, but the Jupyter notebooks just kill me. Very painful to convert those to decent code.

jofus

This is great, and works very well! I have tried it with several 13B-parameter models.

rgm

My main goal is not to chat with one or more HTML pages referenced by URL(s), but to enter the home URL of, e.g., an online doc site, then crawl, scrape, and process it so I can chat with ALL of its pages.

attilavass
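One way to approach whole-site chat (not covered in the video) is a small same-domain crawler that feeds every discovered page into the indexing step. Below is a sketch of the link-extraction half using only the standard library; the class name and the example domain are my own, and the fetching loop is left as a comment:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.add(urljoin(self.base_url, value))

def same_domain_links(html, base_url):
    """Return links in `html` that stay within base_url's domain."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    domain = urlparse(base_url).netloc
    return {u for u in parser.links if urlparse(u).netloc == domain}

# A breadth-first loop would then fetch each unseen URL, extract its text,
# add it to the vector store, and queue same_domain_links() of that page.
```

LangChain also ships a recursive URL document loader that does something similar out of the box, which may be simpler than rolling your own crawler.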

Thanks for sharing quality content.
I have a request: please share some videos on creating a Q&A system over local PDFs, web pages, etc. with locally stored LLMs, also using LlamaIndex and LangChain.
Thanks

SantK

Thanks so much for your tutorial! Is it possible to stream the tokens and also return the sources at the end of the response?

pauldelage
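Streaming tokens while still returning sources usually works by yielding answer tokens first and attaching the retrieved source metadata as a final event once the answer is complete. A framework-free sketch of that shape; the token list and source list here are stand-ins for a real LLM stream and retriever output:

```python
def stream_answer(tokens, sources):
    """Yield answer tokens one at a time, then a final sources payload.

    `tokens` stands in for an LLM token stream; `sources` for retriever metadata.
    """
    for token in tokens:
        yield {"type": "token", "text": token}
    # Emit sources only after the answer finishes, so the UI can
    # render them below the streamed text.
    yield {"type": "sources", "sources": sources}

def render(stream):
    """Consume the stream the way a chat UI would: text first, sources last."""
    answer, sources = [], []
    for event in stream:
        if event["type"] == "token":
            answer.append(event["text"])
        else:
            sources = event["sources"]
    return "".join(answer), sources
```

In LangChain terms this corresponds to streaming the chain's output and reading the retrieved documents' metadata once the run finishes; both Chainlit and Streamlit support incremental message updates for the token half.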

How can I deploy the Streamlit application with the llama3 model?

hajarelkadiri

May I know: if the URL needs authentication, like a company Confluence page, how can we handle that case?

jyothhiswaroop
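For pages behind authentication, one option is to fetch the HTML yourself with the right credentials and hand the result to the loader. A standard-library sketch of attaching an auth header to a request; the bearer-token scheme, URL, and token are placeholder assumptions, so check your server's auth documentation for the exact header it expects:

```python
import urllib.request

def authed_request(url, token):
    """Build a request carrying a bearer token, e.g. a Confluence API token.

    The header name/value scheme is an assumption; some servers use basic
    auth or session cookies instead.
    """
    return urllib.request.Request(
        url,
        headers={
            "Authorization": f"Bearer {token}",
            "User-Agent": "rag-demo/0.1",
        },
    )

# Fetching would then be:
#   with urllib.request.urlopen(authed_request(url, token)) as resp:
#       html = resp.read().decode("utf-8")
```

LangChain's web loader can typically be given custom request headers as well, which achieves the same thing without a manual fetch step.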

It's taking too much time, 15-20 minutes, to get the result.

Arunkumar-qfit

What's the ideal CPU/GPU setup to run this on my PC?

SoloJetMan