PrivateGPT: Chat to your FILES OFFLINE and FREE [Installation and Tutorial]

In this video, I will show you how to install PrivateGPT on your local computer. PrivateGPT uses LangChain to combine GPT4All and LlamaCppEmbeddings for information retrieval from documents in different formats, including PDF, TXT and CSV. The list of supported file types can be easily extended in PrivateGPT.
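For readers who want to see roughly what that pipeline looks like in code, here is a minimal sketch of the ingest-and-query flow built from the LangChain pieces mentioned above. The model paths, chunk sizes and directory names are illustrative assumptions, not PrivateGPT's exact defaults, and the constructor arguments may vary with your langchain version.

from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma
from langchain.llms import GPT4All
from langchain.chains import RetrievalQA

# Load a document and split it into chunks for embedding.
docs = TextLoader("source_documents/example.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50).split_documents(docs)

# Embed the chunks with a llama.cpp model and persist them in a local Chroma index.
embeddings = LlamaCppEmbeddings(model_path="models/ggml-model-q4_0.bin")  # assumed path
db = Chroma.from_documents(chunks, embeddings, persist_directory="db")

# Answer questions with a local GPT4All model over the retrieved chunks.
llm = GPT4All(model="models/ggml-gpt4all-j-v1.3-groovy.bin", backend="gptj")
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=db.as_retriever())
print(qa.run("What is this document about?"))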

LINKS:

▬▬▬▬▬▬▬▬▬▬▬▬▬▬ CONNECT ▬▬▬▬▬▬▬▬▬▬▬
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

All Interesting Videos:

Comments

Wow, dude, thanks. You basically showed me what I needed to know for many ideas I've had for a while now! Excellent how-to!

jidun

This is just what I've been waiting for!

caiduncan

Please be aware: having tried this and spent hours troubleshooting the install, I finally got it working. Here's what's important to know: it runs on the CPU, not the GPU, and it's nightmarishly slow on the CPU (several days indexing two lengthy PDFs). I haven't figured out how to run this on the GPU yet. I have an RTX 3090, so I will share updates if I can modify the Python scripts to work with the GPU, e.g. using Ray or PyTorch for the tensors. After several days it is still processing my two PDFs, so I might need to pull the plug on the process and start over once I've figured out how to run the ingest/embedding/indexing on the GPU.

Mechagen
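Regarding the GPU question in the comment above: the GPT4All backend used here is CPU-only, but one hedged workaround (not shown in the video) is to move the slow embedding/ingestion step onto the GPU by swapping LlamaCppEmbeddings for a sentence-transformers model running on CUDA. The model name and directory below are assumptions.

from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

# Run the embedding model on the GPU (e.g. an RTX 3090) instead of the CPU.
embeddings = HuggingFaceEmbeddings(
    model_name="all-MiniLM-L6-v2",       # assumed sentence-transformers model
    model_kwargs={"device": "cuda"},
)
db = Chroma(persist_directory="db", embedding_function=embeddings)

Note that if you change the embedding model, the existing index has to be rebuilt, since the stored vectors were produced by a different model.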

It was this kind of information that I was looking for after I found your channel. I didn't have to search the internet anymore because your channel is the best and most complete on the subject. I'm studying your videos and I'm going to join the Discord.
Thanks!

wostonbogoz

This man needs more subscribers. You are doing such a great job man 🙌🙌🎇🎇

swarnodipnag

Almost exactly what I contacted you about! Brilliant! I will use individual family history documents for this application. My plan is to have an animated face avatar from a photo of the person whose history I'll provide in the documents, and speech output for the answers. Input will be by voice-to-text, so each session will be like a slow conversation.
Next I would like the bot to be able to access the web for questions relating to the world history of my ancestors. E.g. a question like "what years was WWI fought?" would possibly not be in the family history, but I would like my ancestor to be able to answer that.
I look forward to your follow-up videos!

JoeInBendigo

Hello. Awesome videos. Can you explain how you run this using a GPU? Which models are you using to run on the GPU, and which GPU do you have?

paulweiss

I don't plan on doing this since I'm unfamiliar with much of the terminology, but it's really interesting to see how stuff like this is created. Wish I could follow along and make one for myself 😂

tiana_roseee

Thank you very much for this precious content.
Do you have a text tutorial on your site?

MarioBarretta

I get a TypeError ("expected str, bytes or os.PathLike object, not BufferedReader") when using PDF files, but it works flawlessly with .txt files. Any solution?

sorcerer
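A hedged guess at the TypeError above: it usually means an open file object is being handed to a LangChain loader that expects a file path. Passing the path string directly (with pdfminer.six installed) avoids it; the path below is illustrative.

from langchain.document_loaders import PDFMinerLoader

pdf_path = "source_documents/report.pdf"   # illustrative path

# PDFMinerLoader(open(pdf_path, "rb")) raises the BufferedReader TypeError;
# hand it the path instead and let the loader open the file itself.
docs = PDFMinerLoader(pdf_path).load()
print(len(docs), "document(s) loaded")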

I have tried the code, but it doesn't work well with Italian documents.
Which LLM and embeddings model do you suggest?

francescopaganelli

Where can I find the diagram you show of how everything is connected?

alterpub

Thanks a lot for this one @prompt, very useful as usual; this makes the procedure easy-peasy for us. I pushed over 1000 PDF files into it and it is still running, so I am still on the ingestion. I guess I will be able to check the query/answer part later in the day, but I wanted to ask you (or anyone else reading this) whether you have tried to update the corpus of data with new docs. I have to add the new docs to the ingestion folder and run the ingestion again, and I guess I should leave the already processed docs there. Do you know if the app will recognise the processed docs so they are not reprocessed again?

joser
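On the re-ingestion question above: whether already-processed files are skipped depends on the version of the ingest script you are running. A hedged sketch of one way to make ingestion incremental is to read the source paths already stored in the Chroma index and only embed files that are not there yet. The directory layout and the "source" metadata key below are assumptions based on how LangChain loaders tag documents.

import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import Chroma

embeddings = LlamaCppEmbeddings(model_path="models/ggml-model-q4_0.bin")  # assumed path
db = Chroma(persist_directory="db", embedding_function=embeddings)

# Source paths that are already in the index, taken from document metadata.
existing = {m["source"] for m in db.get()["metadatas"]}

splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=50)
for name in os.listdir("source_documents"):
    path = os.path.join("source_documents", name)
    if path in existing or not name.endswith(".txt"):
        continue  # already ingested, or not a supported type in this sketch
    db.add_documents(splitter.split_documents(TextLoader(path).load()))

db.persist()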

I've been looking for something like this for a month now. So far everything I've tried hasn't worked, and the only commercially available option doesn't give me any control over getting my data out if I wanted to back it up. Plus it's all sitting on another company's server. I shall be cautiously optimistic as I go through this. 😉

MikeD-tfdk

Pretty good intro.

I'm trying to get this working on my Mac but keep running into problems with packages, first with dotenv and then with langchain.
Is anyone else having similar issues?

joepoptiya

Thanks, great as usual. Which versions of Python and NumPy are you using? I'm trying to build a Dockerfile, but it always fails when it gets to installing numpy 😢

HasanAYousef

Your code is superb, but I have one problem: queries take too much time. How can I reduce this?

nnsgogw

Has anyone gotten this working? If so, have you trained it on your own datasets, and how accurate is it in returning valid output from prompts?

Karl_with_a_K

There's huge news from Kuleshov in the open-source community: they've made a tool called LLMTUNE, which allows training on consumer GPUs. Maybe that's the danger Altman sees with his licensing idea to Congress; models are getting into people's hands.

fontende

I tried with documents in French, but it does not work. Is there a model for French I should download?

clxymox