GraphRAG with Ollama - Install Local Models for RAG - Easiest Tutorial

This video is a step-by-step tutorial to install Microsoft GraphRAG with Ollama models with your own data.

🔥 Get a 50% discount on any A6000 or A5000 GPU rental; use the following link and coupon:

Coupon code: FahdMirza

#graphrag #ollama

PLEASE FOLLOW ME:

RELATED VIDEOS:

All rights reserved © Fahd Mirza
Comments

GraphRAG does cost a lot in API calls indeed.
One of your best videos, I do believe. Thanks a lot!

sergeziehi

I was having the API issue and also was in the 'ragtest' folder. I ran 'python3 -m graphrag.index --root .' whilst in the /ragtest directory instead of 'python3 -m graphrag.index --root ./ragtest' (at 15:14) and it seemed to work. Thank you for this video Fahd. Appreciate all the work and sharing!

TelB
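The path fix above can be sketched as follows. This is a minimal sketch, assuming GraphRAG is installed and the project folder is named `ragtest` as in the video; `--root` is resolved relative to the current working directory, which is why both forms below point at the same folder:

```shell
# Assumes: GraphRAG installed, and a project folder ./ragtest with input/*.txt inside.
# From the parent directory of ragtest:
python3 -m graphrag.index --root ./ragtest

# Equivalently, from inside the ragtest directory itself:
cd ragtest
python3 -m graphrag.index --root .
```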

Excellent work - got a working example going!

georgeknerr

Thanks Fahd for your hard work! Very interesting!!! 1) Is it possible to link GraphRAG to a local ChromaDB database? 2) Does local search also work with your method, or only global search?

vitaliiturchenko

Nice video Fahd - GraphRAG looks really good! I plan on trying it out tonight. The querying against it looks quite expensive though. I wonder if they have built in any caching approach with the query engine. I guess I better do some reading.

TheStuzenz

Hi @Fahd, this is simply excellent stuff. Keep going!

SaddamBinSyed

Hi, great video! I had a question: if you have already processed a PDF document, for example, but a few weeks down the line you need to modify that PDF, how would you update the graph to make sure the existing PDF is cleared and the updated one is the one being searched?

vinp

Excellent tutorial! I was wondering if you had a chance to work with the "graphrag-accelerator" GitHub project that Microsoft also put out. It says it exposes all the GraphRAG functionality as an API.

mikew

Hi Fahd …, 1. Where does GraphRAG store the vectors and graphs? I.e. on the local machine… 2. How do we transfer the entire GraphRAG app from the local machine into the cloud… once we are done with ingestion and testing?

ibc--mediators

Thank you so much for this video! You are Awesome ❤

Ayush-tlny

Thank you! I am curious about visualizing the knowledge graph; how do I visualize it?

framefact
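One possible way to visualize the result, as a sketch: if GraphML snapshots are enabled in `settings.yaml` (`snapshots: graphml: true`, an assumption about your GraphRAG version), the index run writes a `.graphml` file into the run's artifacts folder, which you can load with networkx:

```python
# Sketch: inspect a GraphML snapshot of the knowledge graph with networkx.
# Any path you pass in is your own; the artifacts location varies by run.
import networkx as nx

def summarize_graph(path):
    """Load a GraphML file and return (node_count, edge_count)."""
    g = nx.read_graphml(path)
    return g.number_of_nodes(), g.number_of_edges()
```

From there you can hand the loaded graph to `nx.draw` with matplotlib, or open the `.graphml` file directly in a dedicated tool such as Gephi.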

Perfect job! But when I try to use GraphRAG with Ollama, an error happens. logs.json shows: {"type": "error", "data": "Error Invoking LLM", "stack": "Traceback (most recent call last)…"}, and index-engine.log shows INFO Error Invoking LLM.

Does anyone know how to fix this error?

Thinker-id
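A common cause of "Error Invoking LLM" with Ollama is a missing or wrong `api_base`. As a sketch (the model name is an assumption — use whatever you pulled with `ollama pull` — and the field names assume the GraphRAG `settings.yaml` layout from the video's era), the `llm` section would point at Ollama's OpenAI-compatible endpoint:

```yaml
llm:
  api_key: ollama                       # any non-empty placeholder; Ollama ignores it
  type: openai_chat
  model: mistral                        # assumption: the model you pulled with `ollama pull`
  api_base: http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint
```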

Great job! What if I want to add another document to the GraphRAG index? Should I repeat the --init procedure, or is there another method? Great video, thank you.

TheMariolino

When I run 'python3 -m graphrag.index --root ./ragtest', I get "Errors occurred during the pipeline run, see logs for more details." How do I solve this problem?

zhengwu-jwfm

Nice! Thanks for sharing.
In case we need to use a local embedding model, but not as a service (without Ollama), do we still need to pass the api_base?

aliyoussef

The API key for Ollama should be "ollama". Also, there is no need to do the embeddings locally, because their cost is not high. The main objective should be to do the LLM part with Ollama and then query both globally and locally.

aa-xnhc

Does this solution still work for anybody?

chrishau

Hey, I got it working, but it gives out-of-context answers when I do a local search. Any idea what could be wrong?

AdityaSingh-inlr

Good tutorial. Thank you for sharing the code.

padhuLP

Thank you for the great video! I got an error that says "No text files found in input", even though my input folder clearly does have a *.txt file. Do you know what the problem could be?

lisag.
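One thing worth checking for the "No text files found in input" error: the `input` section of `settings.yaml` must match where the `.txt` files actually live. A minimal sketch (field names assume the GraphRAG version shown in the video):

```yaml
input:
  type: file
  file_type: text
  base_dir: "input"          # relative to the --root folder, not the shell's cwd
  file_encoding: utf-8
  file_pattern: ".*\\.txt$"  # regex that must match your filenames exactly
```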