GraphRAG Ollama: 100% Local Setup, Keeping your Data Private

🌟 Welcome to our channel! 🌟
In this video, we'll delve into the world of GraphRAG, demonstrating how to implement it using Ollama and LM Studio. You'll learn how to convert chunks of data into structured entities and relationships, improving the performance of your language model's responses. This comprehensive tutorial guides you through each step, ensuring you can set up and run GraphRAG locally on your computer. 🖥️✨

📚 Topics Covered:
Introduction to GraphRAG and its advantages
Setting up Ollama and LM Studio
Integrating both tools for effective Graph RAG implementation
Running models locally to keep your data private
Troubleshooting and optimising the setup for better performance
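The setup steps above can be sketched as a terminal session. This is a hedged outline based on the GraphRAG quick-start CLI from mid-2024 and common Ollama commands; the model name, folder layout, and exact flags are assumptions and may differ from what the video uses.

```shell
# Pull a local chat model with Ollama (model name is an example)
ollama pull mistral

# Install GraphRAG and scaffold a project
pip install graphrag
mkdir -p ./ragtest/input
python -m graphrag.index --init --root ./ragtest

# Edit ./ragtest/settings.yaml to point api_base at your local
# Ollama / LM Studio endpoints, then build the index
python -m graphrag.index --root ./ragtest

# Query the index
python -m graphrag.query --root ./ragtest --method global "What are the top themes?"
```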

📅 Timestamps:
0:00 - Introduction to GraphRAG
1:00 - Downloading and Setting Up Ollama & LM Studio
2:50 - Configuring Models and Embeddings
4:25 - Indexing Data with GraphRAG
7:20 - Querying Data: Global Search
8:20 - Local Search Issue

🔥 Don't Miss Out:
If you enjoy this tutorial, please like, share, and subscribe for more content related to Artificial Intelligence and data processing. Click the 🔔 bell icon to stay updated with our latest videos!
Comments

Great work as usual. Humble. Concise. Helpful. Perfect. 👌

anubisai

Hey! Cool video. I actually built a full local solution using Ollama, no need for LM Studio at all. Here's what I did: I created a proxy that translates between OpenAI API embeddings and Ollama's format, both ways.

The cool thing is, it works flawlessly for both global and local queries. I'd be happy to share the script with you if you're interested!

maxs
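The two-way translation this commenter describes could be sketched in Python. This is a hypothetical illustration, not the commenter's actual script: the function names are invented, and the request/response shapes follow the public OpenAI `/v1/embeddings` and Ollama `/api/embeddings` formats (OpenAI batches inputs in one request; Ollama embeds one prompt per call).

```python
def openai_to_ollama(payload: dict) -> list[dict]:
    """Split one OpenAI-style /v1/embeddings request into per-prompt
    Ollama /api/embeddings request bodies."""
    inputs = payload["input"]
    if isinstance(inputs, str):
        inputs = [inputs]  # OpenAI allows a single string or a list
    return [{"model": payload["model"], "prompt": text} for text in inputs]


def ollama_to_openai(vectors: list[list[float]], model: str) -> dict:
    """Wrap raw Ollama embedding vectors in the OpenAI response shape
    that GraphRAG's client expects."""
    return {
        "object": "list",
        "model": model,
        "data": [
            {"object": "embedding", "index": i, "embedding": vec}
            for i, vec in enumerate(vectors)
        ],
        # Token accounting is not provided by Ollama, so zeros are stubbed in
        "usage": {"prompt_tokens": 0, "total_tokens": 0},
    }
```

A real proxy would sit behind an HTTP server, forward each translated request to the Ollama endpoint, and collect the `embedding` field from each reply before wrapping the results.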

Can you please show a way to visualize the knowledge graph with an interactive UI?

SullyOrchestration

Looking forward to more on this. It is the most interesting cutting-edge tech in AI, and almost no one else on YouTube is talking about it.

MattJonesYT

It is essential to conduct a thorough preprocessing of the documents before entering them into the RAG pipeline. This involves extracting the text, tables, and images, and processing the latter through a vision module. Additionally, it is crucial to maintain content coherence by ensuring that references to tables and images are correctly preserved in the text. Only after this processing should the documents be fed to an LLM.

ignaciopincheira

I was eagerly waiting for this, big thanks

Gurdershan

Another great video about GraphRAG, good job.

NimaAmini

This is not at all feasible on my computer, but I would love more GraphRAG videos aimed at how we can get this technology production-ready.

girijeshthodupunuri

Can you please show or explain how to get the visualization of the data? It looks very good, and thanks for the tutorial.

Gwaboo

Good stuff. As expected, on a Mac M2, indexing and global queries are quite slow. Local queries are doable because it's usually just one LLM call after the similarity & graph search.

GeertBaeke

Thank you for this tutorial. Very useful.

sharankumar

Hi Mervin, greetings from New Zealand. I see that it took 20 minutes to index… what are the specs of your machine?

macjonesnz

So compared to GPTs, will its search generation results be better?

xinzhang

At 7:10, I believe the reason it's giving errors is that the URL in the settings file is missing the word "embeddings" at the end. It probably tried some different URLs until it figured it out.

MattJonesYT
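If a truncated embeddings URL is indeed the cause, the fix lands in the `embeddings` section of GraphRAG's `settings.yaml`. Here is a sketch of the relevant fragment, assuming LM Studio's default port (1234) and an example embedding model name; the key point is that `api_base` must include the `/v1` prefix so the client can resolve the `/embeddings` path under it:

```yaml
embeddings:
  llm:
    type: openai_embedding
    api_key: lm-studio          # LM Studio ignores the key, but the field is required
    model: nomic-embed-text     # example model name, replace with your own
    api_base: http://localhost:1234/v1   # must end in /v1 so <api_base>/embeddings resolves
```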

What a perfect video to wake up to after yesterday's video :) I'm starting to think that we're all abusing GraphRAG here. I may be wrong, I'm still a n00b, but we are not using semantic chunking. Also, for those of us with thousands of files, say transcripts, feeding GraphRAG a summary and tags might be good enough for a recommendation engine. If the user wants to dive in, then you use RAG, but you create a separate RAG for each main collection of documents. So GraphRAG could list, say, what cooking classes you can take much faster, and then querying each class's own RAG for details should also be much faster and overall cheaper? What do you think?

lesptitsoiseaux

Nice and useful video, but I'm still not getting one thing. You made this video around 3 weeks ago, but back in April, Ollama released some embedding models. So why are we saying it doesn't have embedding compatibility?

BatukeshwarVats

Quick question: I already have a folder of embeddings and chunks. Can I just pass the documents and embeddings to GraphRAG?

nikhielsingh

Can you create a video on how to use GraphRAG with the Groq API? It looks like nobody has done it yet. Thank you.

codelucky

Hi, how do you fix the issues with running local search from the command line?

song

What is the average query time that you were experiencing with the global/local search?

mllearning-qcdt