Chat with Docs using LLAMA3 & Ollama | FULLY LOCAL | Ollama RAG | Chainlit #ai #llm #localllms

Welcome to our latest YouTube video! 🎥 In this session, we're diving into the world of cutting-edge models and PDF chat applications.

Join us as we harness the power of the new Llama3 model from Meta and the nomic-embed-text embedding model served through Ollama to create a seamless, 100% private chat experience with your PDF documents.

Powered by Ollama, with Chroma as the vector store and Chainlit as the application framework, this application lets you upload PDF documents and chat with their contents.
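
As a rough sketch of that pipeline (not the exact code from the video), the core ingest-and-retrieve flow might look like the following; the file path, chunk sizes, and chain type here are illustrative assumptions:

```python
# Minimal RAG sketch: PDF -> chunks -> Chroma (nomic-embed-text) -> Llama3 via Ollama.
# Assumes `ollama pull llama3` and `ollama pull nomic-embed-text` have been run,
# and that langchain, langchain-community, chromadb, and pypdf are installed.
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.chat_models import ChatOllama
from langchain.chains import RetrievalQA

# 1. Load and chunk the uploaded PDF ("example.pdf" is a placeholder path).
docs = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# 2. Embed the chunks locally and persist them in Chroma.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
vectordb = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# 3. Answer questions with Llama3, grounded on the retrieved chunks.
llm = ChatOllama(model="llama3")
qa = RetrievalQA.from_chain_type(llm=llm, retriever=vectordb.as_retriever(), chain_type="stuff")

print(qa.invoke({"query": "Summarize this document."})["result"])
```

In the actual app, a flow like this is wrapped in Chainlit's chat handlers so the PDF upload and the question-answer loop happen in the browser.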

Discover the magic of Llama3, available in two sizes: 8 billion and 70 billion parameters, either pre-trained or instruction-tuned. These instruction-tuned models are finely calibrated for dialogue and consistently outperform many existing open-source chat models.

The nomic-embed-text model surpasses OpenAI's text-embedding-ada-002 and text-embedding-3-small on both short- and long-context tasks.
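
To give a feel for how the embedding model is used, here is a small example (not from the video) that calls nomic-embed-text directly through Ollama's local HTTP API; the query text is arbitrary:

```python
import requests

# Ollama's embeddings endpoint; assumes Ollama is running on the default port
# and `ollama pull nomic-embed-text` has already been done.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "What does this PDF say about privacy?"},
    timeout=30,
)
resp.raise_for_status()
embedding = resp.json()["embedding"]
print(len(embedding))  # nomic-embed-text produces 768-dimensional vectors
```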

If you're eager to explore further, we've previously showcased PDF chatbots utilizing different models, so be sure to check those out too!

Before diving in, ensure your local machine is equipped with a minimum of 10GB RAM to run this application smoothly. And don't worry, we've got you covered with a step-by-step guide on installing Ollama.
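
If you want to sanity-check your setup before launching the app, a quick script like this (not from the video; the model names match those used above) can confirm that Ollama is running and the required models are pulled:

```python
import requests

REQUIRED_MODELS = ("llama3", "nomic-embed-text")  # models this app expects locally

try:
    # Ollama exposes a local HTTP API on port 11434; /api/tags lists pulled models.
    resp = requests.get("http://localhost:11434/api/tags", timeout=5)
    resp.raise_for_status()
    local = [m["name"] for m in resp.json().get("models", [])]
    print("Ollama is running. Local models:", local)
    for model in REQUIRED_MODELS:
        if not any(name.startswith(model) for name in local):
            print(f"Missing model '{model}': run `ollama pull {model}`")
except requests.exceptions.ConnectionError:
    print("Ollama is not reachable. Install it from ollama.com and start the server.")
```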

Get ready to explore the architecture of our application, followed by an exciting demonstration. Let's get started! 🚀

#ai #generativeai #langchain #llama3 #ollama #localllms #chatbot #llm
#largelanguagemodels

LINKS:
Comments

Amazing video, the first video I found after surfing for 2 hours on YouTube.

akshatanand

Where can we locate the data that is stored in ChromaDB, like the embeddings and vectors, on the local system?

gowithgaurav

Awesome tutorial. I have set up privateGPT and localGPT and am trying to pass in a text file of my WhatsApp chat, which is around 1.9 MB. When I query it, I only see data from the last year and nothing before that, and it gives the error "Initial token count exceeds token limit". Can you make a video on how to ingest large text/PDF files, around 40-50 MB in size? And is it better to have multiple files or a single file?

animusdsouza

@DataInsightEdge Getting an error when I ask the same question, "Definition of the concept of stigma":
Error: The provided text does not contain any information regarding the definition of stigma, so I am unable to provide a definition from the given context.

adityaadi

3. Obtain an API key from OpenAI and add it to the `.env` file in the project directory.

PANDURANG

Excellent stuff. I could set up RAG, but it is taking nearly 7 minutes to answer any query from PDF files. I am running the application on a 16 GB system. Shall I have to increase RAM to improve the speed? Could you please let me know?

pssab

When I upload the PDF it gets processed, but when I ask any question it says "Ollama call failed with status code 404". How do I solve it?

yusufdabir

Super, subscribed. Please provide the source code.

sreeharinittur

I got this error: "2024-05-11 11:57:17 - Collection langchain is not created". Can you help me with this? Thanks

DavidTanSin

Error: Ollama call failed with status code 404.

bigshark