How to build a ChatGPT application with multiple PDFs using Flowise

📄How to build a chat application with multiple PDFs
💹Using three quarters of $FLNG's earnings reports as data
🛠️Achieved with @FlowiseAI's no-code visual builder
🔍Tutorial also offers insight on optimizing results with use of meta information

See more tutorials and examples in the Udemy course. Please use the instructor link for a discount code.

Comments

What if I have hundreds of PDFs and the number is growing dynamically? Do I need to manually wire them up? If yes, that's quite inefficient. If no, then how do you properly display such dense connections? That sort of nullifies the whole point.

wasima

Hey Derek (and everybody), any idea how to link this "PDF chatflow" with a website or web app (such as Bubble)?
I tried making the cURL call from Bubble's API Connector plugin, and I can chat with the PDF, but the upload happens in Flowise, not in Bubble.
I'm looking to integrate it so that uploads can be made from the web app.
Thanks!
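One way to drive the upload from an external app is to send the file inline with the prediction request. The sketch below builds the JSON body for Flowise's prediction endpoint; the `uploads` array with a base64 data URI follows the shape described in Flowise's file-upload docs, but the exact schema and the URL/chatflow ID here are assumptions you should verify against your own Flowise instance.

```python
import base64
import json

# Hypothetical endpoint -- replace with your own host and chatflow ID.
FLOWISE_URL = "http://localhost:3000/api/v1/prediction/<chatflow-id>"

def build_prediction_payload(question: str, pdf_bytes: bytes, filename: str) -> dict:
    """Build the JSON body for a Flowise prediction call that also
    carries a PDF upload as a base64 data URI."""
    encoded = base64.b64encode(pdf_bytes).decode("ascii")
    return {
        "question": question,
        "uploads": [
            {
                "data": f"data:application/pdf;base64,{encoded}",
                "type": "file",
                "name": filename,
                "mime": "application/pdf",
            }
        ],
    }

payload = build_prediction_payload("Summarize Q1 revenue", b"%PDF-1.4 ...", "q1.pdf")
print(json.dumps(payload)[:60])
# The Bubble side would POST this JSON to FLOWISE_URL, adding an
# Authorization header if your instance requires an API key, e.g.:
#   requests.post(FLOWISE_URL, json=payload,
#                 headers={"Authorization": "Bearer <key>"})
```

Building the payload in your own backend (rather than relying on Flowise's UI upload) is what lets end users of the web app supply the documents.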

JoaquinTorroba

Great video! Quick question that I cannot find an answer to anywhere. In Pinecone's documentation, they advise users to delete their index after use. While it might seem counterintuitive, since an existing index could offer faster queries, the reason behind this recommendation likely relates to resource consumption: Pinecone charges for index usage even when the index sits idle or contains no data. Deleting unused indexes helps optimize resources and cost-effectiveness, especially for enterprise solutions dealing with a large number of vectors to search over. This does not seem like an enterprise solution to me if you have to possibly re-embed hundreds of thousands of vectors for each query you perform, does it?

mikew

Hi Derek, thanks for the tutorial! I'm curious if this will upsert the PDFs to the Vector database with each call to the chatbot or only the first time? Is there some kind of caching going on? Thanks!
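On the duplicate-upsert worry: a common safeguard, independent of how Flowise schedules its upserts, is to derive each vector's ID deterministically from the chunk's content, so that re-running the flow overwrites existing vectors instead of inserting duplicates. This is a generic sketch of that idea, not Flowise's internal behavior.

```python
import hashlib

def chunk_id(source: str, chunk_text: str) -> str:
    """Deterministic vector ID: the same document chunk always maps to
    the same ID, so a repeated upsert overwrites rather than duplicates."""
    digest = hashlib.sha256(f"{source}:{chunk_text}".encode("utf-8")).hexdigest()
    return digest[:32]

a = chunk_id("report_q1.pdf", "Revenue rose 12%...")
b = chunk_id("report_q1.pdf", "Revenue rose 12%...")
c = chunk_id("report_q2.pdf", "Revenue rose 12%...")
print(a == b, a == c)  # True False
```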

thewixwiz

How can we automatically add a PDF to the vector store every time a member of my team uploads a PDF into a Google Drive folder?

jessebusma

Hi Derek, thank you for your video! How would this flow look when you also want info from external sources in combination with the PDF?

NexusNL

The vector service is giving me an error: "Request to Ollama server failed: 500 Internal Server Error". I checked, and Ollama is running fine.

consigiere

Great video again. Any reason you don't use the upload-files-by-folder option? Thanks

musumo

Superb video! Thanks. May I know if I need something for it to remember the dialogue? Memory? Thanks!

nntun

Hello, I am wondering about something: when we use a CSV agent, do we not need embeddings, a vector database, or memory? I am currently confused.

bourbe

Can you explain the Recursive Character Text Splitter settings? Why 500 and 200?
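For anyone else wondering about those numbers: a chunk size of 500 with an overlap of 200 means each chunk is about 500 characters, and neighboring chunks share 200 characters, so a sentence cut at one boundary still appears whole in the adjacent chunk. The exact values are just the tutorial's choice. The sketch below shows the size/overlap arithmetic with a naive character window; LangChain's actual RecursiveCharacterTextSplitter is smarter, preferring to break on paragraph and sentence separators first.

```python
def sliding_chunks(text: str, chunk_size: int = 500, overlap: int = 200) -> list[str]:
    """Naive character-window splitter illustrating chunk size/overlap:
    each window starts chunk_size - overlap characters after the
    previous one, so neighbors share `overlap` characters."""
    step = chunk_size - overlap  # 300 with the tutorial's settings
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = sliding_chunks("x" * 1100)
print([len(c) for c in chunks])  # [500, 500, 500]
```

Larger chunks give the LLM more context per retrieved passage but make retrieval coarser; more overlap costs extra storage and embedding calls in exchange for fewer sentences lost at boundaries.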

leonamnagel

Any idea how to prompt ChatGPT in this case to get a more personalized result?

chinchilla

Hi there, after saving the flow I'm getting this error: <Error: Error: ENOENT: no such file or directory, open ''>. Any idea what that could be? Thanks

sylestra

Can you do something with Hugging Face models and embeddings?

nishantkumar-lwce

The outputs of chatbots built using Flowise are very bad. Have you noticed the outputs generated here?

nishantkumar-lwce

Hi, I'm running my Flowise on Render, and I have many PDFs in one folder. When I use 'Folder with Files', it asks me to input a path, but when hosting on Render it never detects my path and gives me an error. I want to add a folder because I have multiple files.

jawadafzal

If we upload 10 files, will it work or not?

ghulam-e-mustafapatel

Great video! Do you know if there is a way to remove "Powered by Flowise" or customize it?

bevzkep

I'm receiving an "Error: Request failed with status code 429". I assume it's because I'm using the free tiers of Pinecone and ChatGPT. Any ideas on a workaround?
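A 429 is a rate-limit response, and the free tiers do have low request quotas, so beyond upgrading, the usual workaround is to slow down and retry with exponential backoff. This is a generic sketch of that pattern (the function names and the use of RuntimeError as the rate-limit signal are illustrative, not any particular client library's API):

```python
import time

def call_with_backoff(fn, max_retries: int = 5, base_delay: float = 1.0):
    """Retry fn() on rate-limit errors with exponential backoff:
    wait 1s, 2s, 4s, ... between attempts. Here fn is expected to raise
    RuntimeError on a 429; substitute your client's rate-limit exception."""
    for attempt in range(max_retries):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)

# Demo with a stub that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("status 429")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # ok
```

Batching upserts (fewer, larger requests) also helps stay under free-tier request limits.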

rhinoclark