New PrivateGPT 2.0 with UI | Chat with your docs securely, completely offline, free and without GPU

PrivateGPT got an update, and it now gives you an easy-to-use, out-of-the-box UI through which you can talk to your private documents completely offline and absolutely free. Everything runs in your local environment, without a GPU.
Your data does not leave your local environment at any step, and the entire process is completely free, since we are going to use open-source large language models as well as open-source embedding models.
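
For reference, a typical local-only setup of the updated PrivateGPT looks roughly like the sketch below; the commands, profile name, and port are assumptions taken from the project's documentation and may differ slightly from what the video shows.

# Rough sketch of a local (offline, CPU-only) install; commands assumed from the PrivateGPT docs
git clone https://github.com/imartinez/privateGPT
cd privateGPT
poetry install --with ui,local          # Gradio UI plus local LLM/embedding extras
poetry run python scripts/setup         # downloads the default open-source LLM and embedding model
PGPT_PROFILES=local make run            # start the server with the local profile
# The UI should then be reachable at http://localhost:8001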

Chapters:
00:00 - Introduction
00:34 - How to install the new updated PrivateGPT 2.0
03:37 - Open Source LLM and Embedding models
05:25 - Running the updated PrivateGPT UI
06:03 - How to use Gradio UI of New PrivateGPT
07:18 - How to Change LLM and Embedding Models
07:57 - Thank You

Check out my similar videos:

YOU CAN FIND ME HERE AS WELL:
Comments

Hi pal! Could you create a video on how to install and use it without any programming knowledge at all?

nadanimestousant

Thank you! Great video! Quick and easy to understand. Thanks!

jttech

Thank you, this was an excellent tutorial.

salmajane

Thank you for sharing, have a great day :-)

lalpremi

Could you create a video tutorial demonstrating the setup process for the Mixtral 8x7B model on PrivateGPT?

tamerellamushi
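
Not something this video covers, but for anyone experimenting: in the updated PrivateGPT the local model is normally chosen in the settings file, so pointing it at a GGUF build of Mixtral 8x7B would roughly look like the fragment below. The key names and the TheBloke repo/file names are assumptions based on the settings layout of that release, and Mixtral also needs a recent llama-cpp-python build plus a lot of RAM, so treat this only as a sketch.

# Hypothetical settings-local.yaml fragment (key names assumed from the PrivateGPT settings layout)
llm:
  mode: local
local:
  llm_hf_repo_id: TheBloke/Mixtral-8x7B-Instruct-v0.1-GGUF
  llm_hf_model_file: mixtral-8x7b-instruct-v0.1.Q4_K_M.gguf
  embedding_hf_model_name: BAAI/bge-small-en-v1.5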

Very handy video, thank you!

Can you maybe make a video where you build your own version where you can't upload files, but instead have predefined files that it uses?

So it just answers from the file which is predefined.

chopperchopper

Hello sir, I have a doubt:

How do we connect Llama 2 with LangChain?

And if we made our own LLM model, how do we connect it with PrivateGPT?

jaswanthroyal

Hello Abid! I loved both of your videos! I wonder though... can it accept a dataframe or JSON directly through its REST API?

FullSpectra
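
On the REST API question: as far as the documented API goes, ingestion is file-based rather than accepting a dataframe object directly, so the usual pattern would be to dump the dataframe to JSON or CSV first and upload that. A rough sketch, where the port and the /v1/ingest endpoint with a multipart "file" field are assumptions taken from the PrivateGPT API docs:

# Sketch: export the dataframe, then upload the file to the running server (endpoint/port assumed)
python -c "import pandas as pd; pd.read_csv('data.csv').to_json('data.json', orient='records')"
curl -X POST http://localhost:8001/v1/ingest -F "file=@data.json"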

While downloading the cmake dependencies, there is an error for the nmake dependencies. How do I solve it?

krishbhat

The CMAKE_ARGS command isn't recognised on Windows. Do you have an alternative for that?

jztjmuf
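
For the Windows errors around CMAKE_ARGS: the CMAKE_ARGS=... prefix is POSIX shell syntax for setting an environment variable inline, which cmd and PowerShell don't understand, so installing cmake or changing PATH won't help. Setting the variable separately and then running pip usually works; the flag value below is only an example and depends on which llama.cpp backend you want.

:: Windows cmd (the -DLLAMA_BLAS=ON value is just an example backend flag)
set CMAKE_ARGS=-DLLAMA_BLAS=ON
pip install --force-reinstall --no-cache-dir llama-cpp-python

# PowerShell equivalent
$env:CMAKE_ARGS = "-DLLAMA_BLAS=ON"
pip install --force-reinstall --no-cache-dir llama-cpp-python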

Hey, I'm curious (commenters, please remember you were new once too 😂). But sir, what happens if these models go offline, get banned, locked down, etc.? Like, with this PrivateGPT the files are local, but it is calling on an OpenAI or Mistral model online, right? What happens if they aren't available in the future? Can we simply change the model to one that is available? Sorry if this is a dumb question; I'm just hoping to build an assistant that doesn't rely on the politics or regulations of the future.

CHURCHGPT

I get this error, how can I fix it: "'CMAKE_ARGS' is not recognized as an internal or external command,
operable program or batch file." To resolve this I installed cmake and then added its path to the environment variables, but even after that I get the same error.

ishukatiyar

Do you provide installation services? I need your help to install it on my MacBook Pro M1.

travelwithashraful

I have finally built it; now the biggest question is how to deploy it as a site or an exe.

martiancoders

error : 'PGPT_PROFILES' is not recognized as an internal or external command,
operable program or batch file

nickjonas-zemm
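
This PGPT_PROFILES error is the same Windows issue as with CMAKE_ARGS above: PGPT_PROFILES=local make run sets the variable inline in POSIX shell syntax, which cmd and PowerShell reject. A sketch of the usual workaround, where the module invocation is an assumption based on the project's Makefile (make itself is often unavailable on Windows):

:: Windows cmd
set PGPT_PROFILES=local
poetry run python -m private_gpt

# PowerShell equivalent
$env:PGPT_PROFILES = "local"
poetry run python -m private_gpt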

Hey Abid, I just saw your previous video "Coding a private gpt". Can you please make a new video explaining "coding a privategpt2"? The code of this new version of PrivateGPT is completely different from the older one and difficult to understand.

jayeshkumar

poetry install --with ui is throwing an error while installing networkx.

Installing networkx (3.2.1): Failed

FileNotFoundError
1040│ the built-in open() function does.
1041│ """
1042│ if "b" not in mode:
1043│ encoding = io.text_encoding(encoding)
→ 1044│ return io.open(self, mode, buffering, encoding, errors, newline)
1045│
1046│ def read_bytes(self):
1047│ """
1048│ Open the file in bytes mode, read it, and close the file.

Cannot install networkx.

Kevinsey

The CMAKE_ARGS command is giving me an error when I run it; it says:
Usage:
pip install [options] <requirement specifier> [package-index-options] ...
pip install [options] -r <requirements file> [package-index-options] ...
pip install [options] [-e] <vcs project url> ...
pip install [options] [-e] <local project path> ...
pip install [options] <archive url/path> ...

no such option: -n

Command used: pip install --force-reinstall -no-cache-dir llama-cpp-python
Can you help?

sudeepsanjaykumar
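
The "no such option: -n" line in that output points at the single dash: pip long options need two dashes, so the command was presumably meant to be the following (same package and flags, just --no-cache-dir spelled with a double dash):

pip install --force-reinstall --no-cache-dir llama-cpp-python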

Hi Abid,
Can you suggest a fast, free embedding model? I have used Hugging Face models, but they are very slow and my program crashes whenever I try to run the code.

venky

The letters are too small, making it difficult to follow the tutorial.

RogerioC