Simple App to Question Your Docs: Leveraging Streamlit, Hugging Face Spaces, LangChain, and Claude!

THIS IS A REUPLOAD: The original title/description/thumbnail of the video were not representative of the content, so I recreated the video to be clearer. This is not a comprehensive tutorial, but you can look forward to more in-depth LangChain tutorials in the coming weeks!

We create an app to upload Canadian bills and ask the AI questions about them. Using Streamlit and LangChain, you can quickly build and deploy AI assistants without needing machine-learning expertise.
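The core pattern behind an app like this is simple: because Claude's context window is large, the whole document can be placed directly into the prompt alongside the user's question. Here is a minimal, dependency-free sketch of that prompt assembly; `build_prompt` is a hypothetical helper (a real app would send the result to the Anthropic API, e.g. via LangChain, rather than just printing it):

```python
# Sketch of the "stuff the whole document into the prompt" pattern.
# build_prompt is a hypothetical name; a real app would pass the result
# to Claude through the Anthropic API or a LangChain LLM wrapper.

def build_prompt(document: str, question: str) -> str:
    """Assemble one prompt containing the full bill text plus the
    user's question, relying on a large context window."""
    return (
        "Human: Here is a document:\n\n"
        f"<document>\n{document}\n</document>\n\n"
        f"Answer this question using only the document: {question}\n\n"
        "Assistant:"
    )

if __name__ == "__main__":
    bill = "Bill C-123: An Act respecting example matters. Section 1: ..."
    print(build_prompt(bill, "What is the short title of this bill?"))
```

In a Streamlit front end, the document would come from a file uploader and the question from a text input; the prompt-building step stays the same.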

About me:

#generativeai #anthropic #claude
Comments

Hey all!

Just want to be sure the content I upload is well represented by the thumbnail/description/title!

chrisalexiuk

Awesome app, Chris. Would love to see more content on building applications with LangChain.

dipankar_medhi

This is great. I’m in New Brunswick and was thinking of pulling in the provincial bills etc.

tradingwithwill

Love it! In addition to LangChain, it'd be great if you could show how to wrap LangChain apps with front-end frameworks like React, Flask, or Next.js.

awa

Very cool. I'm looking forward to the LangChain video(s). It would also be cool to show how to self-host an app like this.

guyindisguise

Very nice video! It would be great if you could make a demo of a chatbot integrated with MySQL or another database, so the chatbot can look up information and reference prices, products, and stock, for example, when responding to customers. So far I haven't found such a tutorial. Thank you very much.

DiegoNaranjo
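The database-backed chatbot the comment above asks about usually works by retrieving rows first and then handing them to the model as context. A minimal sketch using Python's built-in `sqlite3` (standing in for MySQL; `answer_stock_question` and the `products` schema are hypothetical):

```python
# Sketch of grounding a chatbot answer in a database. sqlite3 stands in
# for MySQL; in a real app the retrieved row would be inserted into the
# LLM prompt instead of formatted directly.
import sqlite3

def lookup_product(conn: sqlite3.Connection, name: str):
    """Fetch price and stock so the chatbot can cite real data."""
    return conn.execute(
        "SELECT name, price, stock FROM products WHERE name = ?", (name,)
    ).fetchone()  # None if the product is unknown

def answer_stock_question(conn: sqlite3.Connection, name: str) -> str:
    row = lookup_product(conn, name)
    if row is None:
        return f"Sorry, I couldn't find '{name}'."
    n, price, stock = row
    # A real chatbot would pass this as context to the model.
    return f"{n} costs ${price:.2f} and we have {stock} in stock."

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (name TEXT, price REAL, stock INTEGER)")
    conn.execute("INSERT INTO products VALUES ('Widget', 9.99, 42)")
    print(answer_stock_question(conn, "Widget"))
```

LangChain also ships SQL-oriented chains that generate the query from the user's question; the sketch above shows only the retrieve-then-answer mechanism.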

Great video, Chris. I am waiting for access to Claude. A 100k context window is a game changer in my opinion; it's a seismic change from the 8k context window GPT-4 currently offers most users. In terms of quality of responses, is Claude at the same level as GPT-4? I haven't seen any videos comparing their performance, hence the question. Thanks.

sanesanyo

How do we keep asking questions, continuing the chat down the page with follow-up questions?

sportscardvideos
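Follow-up questions like the comment above describes work by resending the earlier turns with each new question. A stdlib-only sketch of that history handling (in a Streamlit app the list would typically live in `st.session_state`; the function names here are hypothetical):

```python
# Sketch of keeping a running chat history so follow-up questions have
# context. In Streamlit this list would live in st.session_state.

def add_turn(history: list, role: str, text: str) -> None:
    """Record one turn of the conversation."""
    history.append((role, text))

def history_to_prompt(history: list) -> str:
    """Flatten all prior turns into one prompt so the model sees the
    whole conversation, then cue the next assistant reply."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append("Assistant:")
    return "\n".join(lines)

if __name__ == "__main__":
    history = []
    add_turn(history, "Human", "What does section 2 say?")
    add_turn(history, "Assistant", "Section 2 defines key terms.")
    add_turn(history, "Human", "And section 3?")  # follow-up keeps context
    print(history_to_prompt(history))
```

Because the full history is resent each time, very long chats eventually hit the context limit, at which point older turns must be truncated or summarized.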

Thanks for your great videos! There is one thing I haven't understood. If I have a single document that fits within the token limit, as with Claude in your video, I assume I get the best possible answers about the text. But if I want to ask questions across several files, or files larger than the token limit, I can split the text into chunks and do a vector search. Is there instead an option to fine-tune a model with LoRA to include my documents? If so, would that be a more efficient/correct way to ask questions about my documents, or is vector search better? And is uploading the complete document, as you did, the best approach when the documents fit within the token limit?

patrikpatrik
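The chunk-and-retrieve approach mentioned in the last comment can be sketched without any dependencies. Real apps embed chunks with a model and store them in a vector database; the word-overlap similarity below is only an illustrative stand-in for that, and all function names are hypothetical:

```python
# Sketch of chunk-and-retrieve for documents too large for the context
# window. Real pipelines use embeddings and a vector store; bag-of-words
# cosine similarity here just illustrates the mechanism.
import re
from collections import Counter
from math import sqrt

def chunk(text: str, size: int = 50) -> list:
    """Split text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def _vec(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors."""
    va, vb = _vec(a), _vec(b)
    dot = sum(va[w] * vb[w] for w in va)
    norm = sqrt(sum(v * v for v in va.values())) * \
           sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(chunks: list, question: str, k: int = 1) -> list:
    """Pick the k chunks most similar to the question; these are what
    would be stuffed into the LLM prompt as context."""
    return sorted(chunks, key=lambda c: similarity(c, question), reverse=True)[:k]
```

On the fine-tuning part of the question: retrieval like this is the standard way to give a model access to document contents, since fine-tuning teaches style and patterns more reliably than it stores retrievable facts; that said, the sketch above only demonstrates the retrieval side.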