Google Gemma Fully LOCAL RAG ChatBot using Ollama|LangChain|Chainlit|Chat with Docs #ai #ollama #llm

Join us as we dive into the exciting world of AI with the new Google Gemma open-source model, powered by Ollama, to build a cutting-edge RAG application with memory capabilities, using Chainlit and LangChain.

Our application allows you to seamlessly upload PDF docs and engage in conversations with them like never before.

What sets this video apart is our exploration of Gemma, a lightweight open model developed by Google DeepMind and other Google teams. Built from the same research and technology behind the Gemini models, Gemma comes in two sizes: Gemma 2B and Gemma 7B, each offering pre-trained and instruction-tuned variants.

But that's not all! We're harnessing the power of nomic-embed-text, pulled through Ollama, as our embedding model. With an impressive 8192-token context window, nomic-embed-text outperforms other models like OpenAI's text-embedding-ada-002 and text-embedding-3-small on both short and long context tasks. This makes it the perfect choice for embedding in our project.
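For readers who want to see how these pieces fit together, here is a minimal sketch of the kind of pipeline described above, using LangChain's Ollama integrations (without the Chainlit UI layer). The package layout, chunk sizes, model tags (gemma:2b, nomic-embed-text), and the example.pdf path are assumptions to adjust for your setup, not the exact code from the video.

```python
# Minimal sketch: local RAG with Gemma (via Ollama), nomic-embed-text embeddings,
# a Chroma vector store, and conversational memory.
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

# Load the uploaded PDF and split it into chunks
docs = PyPDFLoader("example.pdf").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)

# Embed the chunks with nomic-embed-text and index them in Chroma
embeddings = OllamaEmbeddings(model="nomic-embed-text")
vectordb = Chroma.from_documents(chunks, embeddings)

# Gemma served locally by Ollama, wired into a retrieval chain with chat memory
llm = ChatOllama(model="gemma:2b")
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=vectordb.as_retriever(),
    memory=memory,
)

print(chain.invoke({"question": "What is this document about?"})["answer"])
```

In the actual app, Chainlit handles the file upload and chat UI, and the chain is built once per session; the sketch just shows the LangChain wiring.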
#opensource #localllms #googlegemini #gemini #langchain #model

LINKS:
Comments

Going to try to follow this during the week

It’s exactly what I want to build.

Thanks for the tutorial, hopefully it's easy to follow!

samson

Awesome, this is the tutorial that I've been looking for!

Wondering if you could make another video based on this, but instead of uploading a PDF to begin with, have some PDFs/documents stored in the database already before the chat starts.

xchrisliu

How can I get this app to run on Hugging Face Spaces?

You have to run
ollama pull …
on Hugging Face Spaces, but how do you do that in that environment?

scitechtalktv
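On the Hugging Face Spaces question above: one common pattern is to start the Ollama server and pull the model programmatically when the app boots, before Chainlit starts serving. A minimal sketch, assuming the ollama binary is installed and on PATH inside the Space's container; the model tag and wait time are placeholders.

```python
# Sketch: bring up Ollama and download the model at startup.
import subprocess
import time

subprocess.Popen(["ollama", "serve"])                        # run the Ollama server in the background
time.sleep(5)                                                # crude wait for the server to become ready
subprocess.run(["ollama", "pull", "gemma:2b"], check=True)   # download the model into the environment
```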

Hi! Thank you for sharing... I followed all the steps but I always get "The text does not define the user's question, therefore I cannot provide an answer to this query. The text does not provide information about the classification of depression as a diagnosable disorder. Therefore I cannot answer this question." ?!

bhamadicharef

Can you try fine-tuning the Gemma model on a custom dataset? Thank you. I tried fine-tuning with LoRA but it didn't work properly.

vsudbdk

Hi man! Great tutorial!
I just had a question: I am loading an open-source fine-tuned version of Gemma which is basically made for a specific language, but it is not available on Ollama. When I run it I get this error: Ollama call failed with status code 404.
Thanks man!
Also, I want it so that if I upload the document once, it is initialized every time I start the app; I don't want repeated uploading. Is that possible?

DoomsdayDatabase
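On the point above about uploading a document only once (also raised in an earlier comment): one common approach is to persist the vector store to disk and reload it at startup instead of re-embedding on every run. A minimal sketch using Chroma's persist_directory with the same nomic-embed-text embeddings; the directory, file name, and chunk sizes are placeholders.

```python
# Sketch: build the index once, reuse it on every later run.
import os

from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter

PERSIST_DIR = "./chroma_db"
embeddings = OllamaEmbeddings(model="nomic-embed-text")

if os.path.isdir(PERSIST_DIR):
    # Index already exists on disk: load it, no re-upload or re-embedding needed
    vectordb = Chroma(persist_directory=PERSIST_DIR, embedding_function=embeddings)
else:
    # First run: embed the documents and write the index to disk
    docs = PyPDFLoader("example.pdf").load()
    chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100).split_documents(docs)
    vectordb = Chroma.from_documents(chunks, embeddings, persist_directory=PERSIST_DIR)

retriever = vectordb.as_retriever()
```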

I am getting the error "got multiple values for keyword argument 'callbacks'" after uploading a PDF and asking a question. Any suggestions?

THE-AI_INSIDER

What local machine specs is this running on, please?

nickwoolley

When I send a message like "Hi" or "Hello", it goes into the database, looks for similar embeddings, and returns the output. How do I handle these cases? RAG + greeting handling.

VivekShinde
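On the greeting question above: one simple approach is to route small talk away from the retriever before invoking the RAG chain. A minimal sketch, assuming a chain object like the ConversationalRetrievalChain from the earlier sketch; the greeting list and canned reply are illustrative, not from the video.

```python
# Sketch: answer greetings directly, send everything else through retrieval.
GREETINGS = {"hi", "hello", "hey", "good morning", "good afternoon", "good evening"}

def answer(message: str, rag_chain) -> str:
    text = message.strip().lower().rstrip("!.?,")
    if text in GREETINGS:
        # Small talk: reply directly and skip the vector store entirely
        return "Hello! Ask me anything about the uploaded document."
    # Everything else goes through the retrieval chain as usual
    return rag_chain.invoke({"question": message})["answer"]
```

More elaborate setups use an LLM-based router or an intent classifier, but a keyword check like this already stops greetings from pulling irrelevant chunks out of the index.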

Sir, can you please share the exact PDF document that you are using in this video, as I am working on a similar project but with a low-end PC?

msvn

Hi there, the PDF file is not being uploaded. It only shows "Processing <filename>". Any fixes?

sreeharinittur

What's the environment? Which API key are you using?

kurama_dp