Graph ML: Build Knowledge Graphs using Generative AI and LLMs

Knowledge Graphs
Generative AI,
LLMs
Graph DBs
Neo4j
Cypher
Fine Tune LLMs
Graph ML
Node Embeddings
Graph Features
Langchain
Chatbots
Gradio

#datascience #machinelearning #deeplearning #datanalytics #predictiveanalytics
#artificialintelligence #generativeai #largelanguagemodels #naturallanguageprocessing
#computervision #transformers #embedding #graphml #graphdatascience
#datavisualization #businessintelligence #montecarlosimulation #simulation #optimization
#python #aws #azure #gcp
Comments
Author

If you found this content useful, please consider sharing it with others who might benefit. Your support is greatly appreciated :)

SridharKumarKannam

This series is exactly what I was looking for! Thank you 🙌🏾

andydataguy

Thanks for sharing. This use case is something I have been looking at for a while now.

mulderbm

Great, very helpful and detailed step-by-step instructions. Thanks a lot, really appreciated!

pavellegkodymov

Thank you for the help, sir... I wanted to ask: I'm working on a similar project where my input is unstructured contract files, like non-disclosure agreements, loan agreements, etc. ... How am I supposed to create an input file for those? ...

mindgraphai

Will KGs be the basis for the semantic web they say is coming❓
= An embedding representation would be a very large download, with so many vectors per token❗
= A token representation, where each token is a number, would be more compressed than plain text, but not very useful since the calculation is so easy❗
= But a Knowledge Graph (KG) would be the best❗ A tokenized KG would be even more compressed. (If we can standardize on the best tokens-to-numbers mapping) 🔥

ScottzPlaylists

Very good hands-on! Thank you. Three questions:
1. Why run `clean_text()` inside the for loop for every prompt? Why not run it once, before the loop?
2. Why have the LLM generate the unique ID? Wouldn't it be better to have a proper UUID generator for every entity instance (person, skill, company, etc.) and inject it into the hydrated template, or replace it ex post in the LLM's response?
3. If the LLM is creating the "unique" IDs (e.g. `skill1`, `skill2`, etc.), how do you normalize IDs across multiple resumes? Wouldn't you risk ID collisions across resumes? e.g. resume_1 -> `{"skill_label": "SQL", "skill_id": "skill1"}` vs resume_2 -> `{"skill_label": "Mongo", "skill_id": "skill1"}`?
Thanks again!
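[Editor's note] On questions 2 and 3 above: a minimal sketch (not the video's actual code; the `EntityRegistry` class and the sample records are hypothetical) of replacing the LLM's ad-hoc IDs ex post with stable UUIDs keyed on the entity label, so the same label always maps to one ID and different labels can never collide:

```python
import uuid

class EntityRegistry:
    """Maps (entity_type, normalized label) -> a stable UUID, so the same
    skill mentioned in different resumes gets one ID, and the LLM's
    ad-hoc IDs (skill1, skill2, ...) never collide across resumes."""

    def __init__(self):
        self._ids = {}

    def get_id(self, entity_type, label):
        key = (entity_type, label.strip().lower())
        if key not in self._ids:
            self._ids[key] = str(uuid.uuid4())
        return self._ids[key]

registry = EntityRegistry()

# Overwrite the LLM-generated IDs after parsing its JSON response.
resume_1 = {"skill_label": "SQL", "skill_id": "skill1"}
resume_2 = {"skill_label": "Mongo", "skill_id": "skill1"}
for record in (resume_1, resume_2):
    record["skill_id"] = registry.get_id("skill", record["skill_label"])
```

After this pass, "SQL" and "Mongo" carry distinct IDs even though the LLM emitted `skill1` for both, and a second mention of "SQL" in any later resume reuses the first ID.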

leobeeson

Thank you for your efforts. I have a question: How can we extract entities from PDF files?
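[Editor's note] A common recipe for this (an assumption, not something shown in the video): extract raw text with a PDF library such as pypdf, then split it into overlapping chunks that fit the LLM's context window and run the same extraction prompt on each chunk. A stdlib-only sketch of the chunking step, with the PDF read left as a comment:

```python
def chunk_text(text, max_chars=3000, overlap=200):
    """Split extracted PDF text into overlapping chunks small enough to
    fit the LLM's context window alongside the extraction prompt."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Overlap so an entity split across a chunk boundary still
        # appears whole in at least one chunk.
        start = end - overlap
    return chunks

# The text itself would come from a PDF library, e.g. (assumption):
#   from pypdf import PdfReader
#   text = "\n".join(p.extract_text() for p in PdfReader("cv.pdf").pages)
sample = "word " * 2000  # stand-in for extracted document text
chunks = chunk_text(sample)
```

Entities extracted per chunk can then be merged through the same ID-normalization step used for multiple resumes.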

imanechatoui

Have you tried your prompt on OpenAI to see if Vertex AI is better? And what made you go with the few-prompt approach rather than simply specifying your entity relations as context and having the LLM create the full graph for you in one shot?

rayhon

Can we do the same with Azure OpenAI?

amruth

How can we make use of any other model, e.g. OpenAI?

Shivam-biuo

Sir, can't we use some other open-source LLM instead of text-bison?
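[Editor's note] Several comments ask about swapping text-bison for OpenAI, Azure OpenAI, or an open-source model. One way to keep that flexible (a sketch, not the video's code; the prompt text and `fake_llm` stub are made up here) is to separate the extraction prompt from the provider call, so only one injected callable changes per backend:

```python
import json

# Hypothetical extraction prompt; the real one would list your
# entity/relation schema in detail.
EXTRACTION_PROMPT = (
    "Extract entities and relations from the resume below as JSON "
    "with keys 'entities' and 'relations'.\n\nResume:\n{resume}"
)

def extract_graph(resume_text, llm_call):
    """llm_call is any callable prompt -> completion string: a Vertex AI
    text-bison call, an OpenAI/Azure client, or a local open-source model."""
    response = llm_call(EXTRACTION_PROMPT.format(resume=resume_text))
    return json.loads(response)

# Stub standing in for a real provider client, for illustration only:
def fake_llm(prompt):
    return '{"entities": [{"label": "SQL", "type": "skill"}], "relations": []}'

graph = extract_graph("Jane Doe. Skills: SQL.", fake_llm)
```

With this shape, answering "can we use Azure OpenAI / another open-source model?" reduces to writing one small adapter function per provider.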

kevinkate

Thank you so much. Do I have to pay for Google's model? Is there any way I can use it for free?

romakhajiev

Please add a reference to the notebook code.

vishaldesai