Advanced RAG with Knowledge Graphs (Neo4J demo)

I recently created a demo for some prospective clients of mine, demonstrating how to use Large Language Models (LLMs) together with graph databases like Neo4J.

The two have a lot of interesting interactions. For one, you can now create knowledge graphs more easily than ever before, by having an LLM extract the graph entities and relationships from your unstructured data, rather than having to do all of that manually.

On top of that, graph databases also have some advantages for Retrieval Augmented Generation (RAG) applications compared to vector search, which is currently the prevailing approach to RAG.
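As a rough illustration of the extraction idea the demo shows, here is a minimal sketch of turning LLM-extracted (subject, relation, object) triples into Cypher `MERGE` statements for Neo4j. The triples and entity names below are hypothetical stand-ins; in the actual workflow they would come from an LLM call over your documents, not a hard-coded list.

```python
# Sketch: converting extracted triples into idempotent Cypher MERGE statements.
# The triples here are hard-coded stand-ins for an LLM extraction step.

def triples_to_cypher(triples):
    """Generate one MERGE statement per (subject, relation, object) triple."""
    statements = []
    for subj, rel, obj in triples:
        rel_type = rel.upper().replace(" ", "_")  # e.g. "works at" -> "WORKS_AT"
        statements.append(
            f'MERGE (a:Entity {{name: "{subj}"}}) '
            f'MERGE (b:Entity {{name: "{obj}"}}) '
            f"MERGE (a)-[:{rel_type}]->(b)"
        )
    return statements

# Hypothetical triples an LLM might return from a document:
triples = [
    ("Acme Corp", "acquired", "Beta Inc"),
    ("Beta Inc", "builds", "graph databases"),
]

for stmt in triples_to_cypher(triples):
    print(stmt)
```

Using `MERGE` rather than `CREATE` keeps repeated runs from duplicating nodes; in a real pipeline you would run these statements through the Neo4j driver's session API instead of printing them.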

▬▬▬▬▬▬ T I M E S T A M P S ▬▬▬▬▬▬

0:00 - Intro
2:16 - Demo starts
2:55 - Creating graph from unstructured data
4:23 - Chatting with the knowledge graph
5:55 - Advantages of Graphs vs Vector Search
Comments

Hey everybody, thanks for the great comments!

johannesjolkkonen

I love it when data engineers make videos; they're so easy to understand. Even the description is well structured 👍

NLPprompter

Really neat demo! I think this works so well because graphs help LLMs approximate the sort of clear relationships humans have in their brain about the world.

itsdavidmora

I had to subscribe based on this idea alone! I'm trying to think of another way I could implement this with standard RAG for those that use LangChain/Flowise, and Mermaid code to hold the node information.

AssassinUK

This is a great video.

It clearly explained the difference between vector databases and graph databases, and the new features we can build using graph databases. Thank you.

SafetyLabsInc_ca

This is incredible! I see so many use cases opening up. Thank you for sharing this!

w_chadly

Super nice food for thought. Thanks for sharing an alternative. Would love a deeper dive with some clear examples confirming the 3 advantages 😊 But might experiment myself for fun too!

alchemication

This concept & this video are truly amazing.
I have a specific idea of how to apply this. I think this might change my whole project & I will explore this graph-based approach!!
Great work - thank you.

agentDueDiligence

Gotta look at decentralized knowledge graphs. Those are the future of RAG databases.

AEVMU

Very nice demo. It showed why and how to use the graph database for RAG and answered questions that I came up with while watching.

jonathancooper

Great video! I wanted to explore the graph dbs exactly for this use case. Imagine also adding work pieces to this. Jiras, code reviews, comments, etc.

P.S. the music is great 😂

WisherTheKing

The video is great, thank you, but the background music made it difficult for me to focus :(

layanetaiwi

Fabulous video, thanks! Would be even better with no music, or at least if it was very much lower volume :)

Epistemophilos

Great demo on learning Neo4j and LLMs. In typical RAG, a vector database is created for the documents; how does that work with a Neo4j graph DB?

chabo

That's exactly what I am looking for! Apart from the tutorials, are you also considering starting a Discord channel where people can chat? I think there is growing interest in KG + LLMs but nowhere to discuss it.

chenzhong

I would be curious about your view on when vector search is better suited than graph search for RAG. Thanks for this great video! It helps a lot.

inflationking

great content and delivery - love your work

michaeldoyle

Thanks for sharing! Can you also share how you are dealing with consolidation of output nodes? Some project descriptions might generate "Graph Neural Nets", another "Graph Neural Network" or "GNN".
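The video doesn't cover the alias problem raised here, but one common way to handle it is canonicalizing entity names before creating nodes. The sketch below uses a hand-written alias map purely for illustration; in practice the map might be built with string similarity, embeddings, or an LLM pass over the extracted entities.

```python
# Sketch: consolidating entity aliases so that "Graph Neural Nets",
# "Graph Neural Network", and "GNN" all map to a single node.
# The alias table is hand-written here; a real pipeline would build it
# automatically (string similarity, embeddings, or an LLM pass).

CANONICAL = {
    "graph neural nets": "Graph Neural Network",
    "graph neural network": "Graph Neural Network",
    "gnn": "Graph Neural Network",
    "gnns": "Graph Neural Network",
}

def canonicalize(name: str) -> str:
    """Map an extracted entity name to its canonical form (fallback: itself)."""
    return CANONICAL.get(name.strip().lower(), name.strip())

entities = ["Graph Neural Nets", "GNN", "Graph Neural Network", "Neo4j"]
print(sorted({canonicalize(e) for e in entities}))
# -> ['Graph Neural Network', 'Neo4j']
```

Running canonicalization before the `MERGE` step means duplicate aliases collapse into one node instead of fragmenting the graph.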

_jen_z_

Excellent job Johannes! After watching the video "Knowledge Graph Construction Demo from raw text using an LLM" by Neo4j, I came across your video and found that you addressed the crucially important question some of us are thinking about: "How can we improve the way we do RAG?" I agree with your assessment that using KGs provides very significant benefits that would compel us to use this approach over vector embeddings. However, am I correct in understanding that we need better workflows / pipelines to get all the kinds of data we need into a KG to take full advantage of these benefits?

Sounds like you may have listened to Denny Vrandecic discussing "The Future of Knowledge Graphs in a World of Large Language Models".

evetsnilrac

How can that generate useful relationship triples when you can only give small subsets of the data to the LLM at a time?
