How I built a Multiple CSV Chat App using LLAMA 3 + OLLAMA + PANDASAI | FULLY LOCAL RAG #ai #llm

In this video, we explore what Meta's open-source Llama 3 LLM makes possible across a wide range of applications. Today, we focus on using Llama 3 to chat with multiple CSV files, analyzing and visualizing them entirely locally, by combining the power of Pandas AI and Ollama inside a Streamlit app. And the best part? Your privacy is protected, because everything runs on your own machine.

PandasAI uses a generative AI model to understand natural-language questions and translate them into Python code and SQL queries. Watch how it interacts with your data.
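
For readers who want to see what that looks like in code, here is a minimal sketch of pointing PandasAI at a local Llama 3 model served by Ollama. It is not the video's exact code; the `LocalLLM` wrapper, the `llama3` model tag, and the `sample.csv` file name are assumptions for illustration.

```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm.local_llm import LocalLLM

# Ollama serves an OpenAI-compatible API at localhost:11434 by default.
llm = LocalLLM(api_base="http://localhost:11434/v1", model="llama3")

# Wrap an ordinary pandas DataFrame so it can be queried in plain English.
df = SmartDataframe(pd.read_csv("sample.csv"), config={"llm": llm})

# PandasAI asks the model to write pandas code for the question,
# executes it, and returns the result.
print(df.chat("Which column has the most missing values?"))
```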

This tutorial covers an overview of the application, step-by-step guidance on installing Ollama on your local machine, and a full walkthrough of building the app with Streamlit.
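
As a rough outline of how the pieces fit together in Streamlit (again, not the video's exact code; the multi-file uploader, the `SmartDatalake` usage, and the `llama3` model tag are assumptions for illustration):

```python
import pandas as pd
import streamlit as st
from pandasai import SmartDatalake
from pandasai.llm.local_llm import LocalLLM

st.title("Chat with multiple CSV files (fully local)")

# Note: Streamlit caps uploads at 200 MB by default; raise server.maxUploadSize
# in .streamlit/config.toml if your CSV files are larger.
files = st.file_uploader("Upload CSV files", type="csv", accept_multiple_files=True)

if files:
    dataframes = [pd.read_csv(f) for f in files]

    # Llama 3 running locally behind Ollama's OpenAI-compatible endpoint.
    llm = LocalLLM(api_base="http://localhost:11434/v1", model="llama3")
    lake = SmartDatalake(dataframes, config={"llm": llm})

    question = st.chat_input("Ask a question about your data")
    if question:
        with st.chat_message("user"):
            st.write(question)
        with st.chat_message("assistant"):
            st.write(lake.chat(question))
```

A setup like this would be launched with `streamlit run app.py` while Ollama is running in the background with the Llama 3 model pulled (`ollama pull llama3`).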

So, without further ado, let's dive into the demo and unlock the full potential of Meta Llama 3!

#ai #llm #fullylocal #llama3 #opensourcellm #localllms #generativeai #csvchat #chatbot
#streamlit

LINKS:
Comments

How do I find out the api_base of my local LLM?

belajardata

I have an Nvidia GPU, but the response takes too long just to ask how many records there are. I have only 1,000 records. I wonder why it is so slow.

voedito

16:41 Did the LLM always generate/draw the charts that you wanted? Mine rarely does.

pneydny

How can I raise the 200 MB upload limit? My CSV files are larger than 1 GB.

lowkeylyesmith

Hey man, a couple of issues: what about loss of context when you have 8,000 tokens of input, say you max out the input tokens?

We also need RAG for long-form content injection, please.

What is the best way to do this?

criticalnodecapital

Can it also deal with 10,000 rows or more?

satvikbisht

This does not work the way it should!

gnosisdg