Mistral-7B with LocalGPT: Chat with YOUR Documents

In this video, I will show you how to use the newly released Mistral-7B by Mistral AI as part of LocalGPT. LocalGPT lets you chat with your own documents. We will also go over some of the new updates to the project.
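As an illustration of the model swap covered in the video: LocalGPT selects its model through a pair of constants. This sketch assumes the repo's `constants.py` convention of `MODEL_ID` / `MODEL_BASENAME`; the exact repo IDs and file names are assumptions, so check the project for current values.

```python
# Assumed constants.py convention: point LocalGPT at a Mistral-7B build.
# GPTQ build for GPU inference (quantized weights):
MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GPTQ"
MODEL_BASENAME = "model.safetensors"

# GGUF build for CPU / llama.cpp-style inference (uncomment to use instead):
# MODEL_ID = "TheBloke/Mistral-7B-Instruct-v0.1-GGUF"
# MODEL_BASENAME = "mistral-7b-instruct-v0.1.Q4_K_M.gguf"
```

After editing the constants, re-run the app so the new model is loaded.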

If you like the repo, don't forget to give it a ⭐

#localGPT #mistral #mistral-7B #langchain

CONNECT:

LINKS:
Comments

Can you make a video on how to use open-source LLMs as chatbots on tabular data?

anuvratshukla

Thank you so much for providing us with the updated code for Mistral! I have tested Mistral vs. Llama-2 Chat on long texts about philosophy; in my case, Llama-2 seems to understand them better at the moment. Thank you for developing this project!

Nihilvs

Thank you for this valuable tutorial. I want to ask about languages other than English. What do you advise for building a LocalGPT in a non-English language?


Nice video. How can we test the model with test data? How can we ensure that it is generating answers correctly?

satyajamalla

Can you run this in LangChain or Flowise?

maxamad

So you just implemented Llama along with a RAG approach to the prompts, right?

birb
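Roughly, yes: LocalGPT pairs a local LLM with retrieval-augmented generation. The retrieve-then-prompt control flow can be sketched without any framework. This is a toy illustration only; LocalGPT itself uses a real embedding model and a vector store rather than the bag-of-words "embedding" below.

```python
# Framework-free sketch of the retrieve-then-prompt (RAG) idea.
import math
import re
from collections import Counter

def embed(text):
    # toy embedding: a word-count vector (a real system uses a neural embedder)
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # cosine similarity between two sparse count vectors
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # rank document chunks by similarity to the query, keep the top k
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Mistral-7B is a seven-billion-parameter language model.",
    "LocalGPT ingests your documents into a local vector store.",
    "Bananas are rich in potassium.",
]
context = retrieve("Which model does LocalGPT use?", chunks)
# the retrieved chunks are stuffed into the prompt sent to the local LLM
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQuestion: ..."
```

The irrelevant banana chunk is ranked last and never reaches the prompt, which is the whole point of the retrieval step.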

Thanks, it is a good video. Is there a suggestion to make the response faster? I tested with an Nvidia GeForce RTX 3050.

derarfares

Make a comparison of your project with the "h2o gpt" project, please.

alx

Have you thought of providing a Colab notebook?

timtensor

How can we optimize the LLM's response time?

ShaileshSarda-mz

Cool! Is it possible to use it in the oobabooga text-generation UI?

Techonsapevole

Why is it called "GPT"? Does it use an API key to interact with GPT models? If yes, then why do you need other LLMs with it? If not, then what does it do that makes the other LLMs work like a charm? Like, it just takes a document and extracts answers for unseen questions?

Sorry for the newbie question; I'm exploring this topic for the first day.

SMFahim-vozn

I'm still unclear about what we do with these models once they are fine-tuned on our data. Where do we put this file so it can be used by the public in a chat application, say on WordPress? Customers obviously don't want to log into a terminal; they go to a site, a chatbot prompts them, and they want that chatbot to reply to them personally. Is there software already out there that can accept a fine-tuned LLM? Can you suggest one that doesn't have a subscription, preferably for WP?

gjsxnobody

The program is running with internet. Can we run LocalGPT without internet instead? Please explain how to do that.

llamamaguluri
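On the offline question: once the model weights have been downloaded, the Hugging Face tooling LocalGPT builds on can be forced to run entirely from its local cache. A sketch, assuming the weights were fetched once while online (the entry-point script name is taken from the repo's convention and may differ in your version):

```shell
# Assumption: the model was downloaded once while online, so it already sits
# in the local Hugging Face cache. These variables then block network access.
export HF_HUB_OFFLINE=1          # huggingface_hub: never contact the Hub
export TRANSFORMERS_OFFLINE=1    # transformers: load only from the local cache
# then run the app as usual, e.g.:  python run_localGPT.py
```

The first run still needs internet to download the model and embedding weights; every run after that can be fully offline.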

Thanks! Awesome video. Is there a way to do it in Google Colab?

WilsonCely

If I ingest fileA and then want to create another GPT instance with different base knowledge, separate from the earlier one, should I just rerun the ingest with the files replaced, or do I need to create a separate conda environment?

filemonek

When I tested the code, it always returned "Split into 0 chunks of text". Does anyone know what causes this?

zhaojieyin
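A "Split into 0 chunks" result usually means the document loaders extracted no text at all, so the splitter had nothing to cut: an unsupported file type, empty files, or image-only (scanned) PDFs. A hypothetical sanity check on the source folder; the supported-extension set below is an assumption, so check the loader mapping in the repo's `ingest.py` for your version:

```python
import os

# Assumed set of extensions the loaders handle; verify against ingest.py.
SUPPORTED = {".txt", ".md", ".pdf", ".csv"}

def diagnose(folder):
    """Report files that would yield zero text chunks."""
    problems = []
    for name in sorted(os.listdir(folder)):
        path = os.path.join(folder, name)
        ext = os.path.splitext(name)[1].lower()
        if ext not in SUPPORTED:
            problems.append(f"{name}: no loader for extension {ext!r}")
        elif os.path.getsize(path) == 0:
            problems.append(f"{name}: file is empty")
    return problems
```

Note that scanned PDFs pass this check (the extension is supported and the file is non-empty) yet still produce zero chunks, because there is no text layer to extract; those need OCR first.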

Hi, thanks for uploading. Why do I get this error while running your model?

super().__init__(**kwargs)
File "pydantic/main.py", line 341, in
1 validation error for LLMChain
llm
none is not an allowed value

syedluqman

Thanks for showing RAG with Mistral. Why do you advise using GPTQ instead of GGUF when you have a GPU?

henkhbit

Hi, is internet access required to run the model?

llamamaguluri