Unleash the Power of Local Llama 3 RAG with Streamlit & Ollama! 🦙💡

This video is about building a Streamlit app for local RAG (Retrieval-Augmented Generation) using Llama 3 with Ollama.
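To illustrate the core idea behind the app described above, here is a minimal sketch of the retrieval step in a RAG pipeline. The real app would use Ollama embeddings and a vector store such as FAISS; this sketch substitutes plain bag-of-words cosine similarity so it runs with the standard library alone, and the function names are illustrative, not taken from the video.

```python
# Sketch of RAG retrieval: chunk a document, score chunks against the
# query, and build a context-grounded prompt for the LLM.
# Bag-of-words cosine similarity stands in for real embeddings here.
from collections import Counter
import math

def split_into_chunks(text, chunk_size=50):
    """Split text into chunks of roughly chunk_size words."""
    words = text.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def vectorize(text):
    """Turn text into a simple word-count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    qv = vectorize(query)
    return sorted(chunks, key=lambda c: cosine(qv, vectorize(c)),
                  reverse=True)[:k]

def build_prompt(query, context_chunks):
    """Assemble a prompt that grounds the model in retrieved context."""
    context = "\n---\n".join(context_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

In the full app, the assembled prompt would then be sent to Llama 3 through Ollama (for example via the `ollama` Python package's `chat` call) and the response streamed into the Streamlit UI.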


Comments

Sir, grey on black is a bit hard on the eyes; please help if possible. But forever grateful for your amazing knowledge sharing.

SasiKumar-sppy

Great video, mate. How can I add a chat history component to this? Can you please make a video on that as well? Thanks, keep up the great work.

AkulSamartha

Thanks, Sid. Could you please share the sample source code? One suggestion: expand this to read multiple document types like PDF, CSV, XLS, and XML; that would make it a true RAG. It would also be good to add a speech-input feature.

KumR

Hey Siddhardhan!
Such nice and informative videos; I have been following your channel for a long time.

I have a particular use case to work on using the above model. I have been working on it for two weeks and finished building the model you built in this video.
Is there any chance of sharing the use case with you?

simha-tyjn

The packages are not compatible; I can't even run the app with the mentioned package versions. Which Python version are you using?

tqnxndj

Thank you so much! Can you do a tutorial using Azure AI Search?

justine

Can we retrieve the answer and store it in an Excel file?

panteliskouridakis

Is using FAISS embeddings for split texts free? What additional settings do we need?

akshatanand