GraphSAGE - Inductive Representation Learning on Large Graphs | GNN Paper Explained

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

In this video, I do a deep dive into the GraphSAGE paper!

One of the first papers to push GNNs to super large graphs.

You'll learn about:
✔️ All the nitty-gritty details behind GraphSAGE

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

⌚️ Timetable:
00:00 Intro
00:38 Problems with previous methods
04:30 High-level overview of the method
06:10 Some notes on the related work
07:13 Pseudo-code explanation
12:03 How do we train GraphSAGE?
15:40 Note on the neighborhood function
17:40 Aggregator functions
23:30 Results
28:00 Expressiveness of GraphSAGE
30:10 Mini-batch version
35:30 Problems with graph embedding methods (drift)
40:30 Comparison with GCN and GAT

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
💰 BECOME A PATRON OF THE AI EPIPHANY ❤️

If these videos, GitHub projects, and blogs help you,
consider helping me out by supporting me on Patreon!

One-time donation:

Much love! ❤️
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

💡 The AI Epiphany is a channel dedicated to simplifying the field of AI through creative visualizations and, more generally, a stronger focus on geometric and visual intuition rather than algebraic and numerical "intuition".

▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬
👋 CONNECT WITH ME ON SOCIAL

👨‍👩‍👧‍👦 JOIN OUR DISCORD COMMUNITY:

📢 SUBSCRIBE TO MY MONTHLY AI NEWSLETTER:

💻 FOLLOW ME ON GITHUB FOR COOL PROJECTS:

📚 FOLLOW ME ON MEDIUM:
▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬

#graphsage #gnns #graphtheory
Comments

Any feedback is welcome - it'll help me make higher-quality videos for you folks over the long run. Happy New Year! ❤

TheAIEpiphany

Really great explanation of a paper that's tough to understand by yourself.

bhaveshjain

Very well explained. Even though GNNs are a new concept for me, the way you explained the background of GCN, GAT, and GraphSAGE, and how attention and LSTMs are used as aggregators, made it very easy to understand. Thank you!

kshitijgangwar

Lovely, simple illustration and comparison with other common GNNs like GCN and GAT. Really helps me. Thank you.

akkkkiiiiii

Thanks for this series, it has been very instructive. I will be watching the rest of the GNN playlist!

rafael_l

Loving the series. I've been interested in program synthesis ever since the ARC challenge, and I wonder why there isn't more use of GNNs for modelling programs, which are inherently graph-structured objects, though quite sparse. Natural language description --> code, or multi-line code completion, would be very cool applications.

DavenH

That was a helpful review. I really appreciate it.

mozhgansadatmomtazdargahi

Hey, great explanation 🎉! Looking forward to more.

arnabkumarpan

Thanks for the great explanation. I see that they refer to two things whenever they talk about the learnable parameters: the parameters of the K aggregator functions, and the set of weight matrices W^k. I pretty much understand how the W^k's are learned, but I didn't understand what they mean by the parameters of the K aggregator functions, and how those are learned?

SwamySriharsha
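
A minimal PyTorch sketch that may help with the question above (my own illustration, not the authors' reference code; layer names and dimensions are assumptions). With the max-pooling aggregator, the AGGREGATE function itself carries learnable parameters, namely the small MLP applied to every neighbor, and those are trained jointly with the weight matrix W^k by ordinary backprop:

```python
import torch
import torch.nn as nn

class SAGEPoolLayer(nn.Module):
    """One GraphSAGE layer with the max-pooling aggregator (sketch)."""
    def __init__(self, in_dim: int, out_dim: int, pool_dim: int = 128):
        super().__init__()
        # Parameters of the AGGREGATE_k function itself (the pooling MLP):
        self.pool_mlp = nn.Linear(in_dim, pool_dim)
        # The weight matrix W^k from Algorithm 1, applied after CONCAT:
        self.W = nn.Linear(in_dim + pool_dim, out_dim)

    def forward(self, h_self, h_neigh):
        # h_self:  (N, in_dim)     current states of the nodes in the batch
        # h_neigh: (N, S, in_dim)  states of S sampled neighbors per node
        pooled = torch.relu(self.pool_mlp(h_neigh)).max(dim=1).values
        h = torch.relu(self.W(torch.cat([h_self, pooled], dim=-1)))
        return h / h.norm(dim=-1, keepdim=True).clamp_min(1e-12)  # l2-normalize
```

Both `pool_mlp` and `W` show up in `model.parameters()`, so a single optimizer step updates the aggregator's parameters and W^k together; the mean aggregator is simply the special case where AGGREGATE has no parameters of its own, and for the LSTM aggregator the extra parameters are the LSTM weights.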

Amazing video, I love your content! Would love to connect and maybe talk a bit about CV + graphs for aesthetics recommendations.

daniel-mika

I was wondering, is it possible to make a video on how to implement the code and reproduce the experimental results from the paper? It would be helpful to learn that.

UniverseGames

Request to prepare a video on "GraphSAINT: Graph Sampling Based Inductive Learning Method".

mrugendrarahevar

Hi Aleksa, can you make a video about your background and your journey into AI/ML/DL?

SuperLOLABC

Hi, very helpful content. Can you please talk about how the negative samples are sampled? It's not explained in the paper and I find it very confusing.

menethilcaesar
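
The paper indeed delegates this detail to the word2vec-style negative-sampling literature, so here is a hedged sketch of its unsupervised objective (Eq. 1 in the paper). The degree**0.75 sampling distribution below is the common word2vec convention, an assumption on my part rather than something the paper pins down:

```python
import torch
import torch.nn.functional as F

def unsup_loss(z_u, z_v, z_neg):
    # z_u, z_v: (B, D)    embeddings of node pairs co-occurring on random walks
    # z_neg:    (B, Q, D) embeddings of Q sampled negatives per positive pair
    pos = F.logsigmoid((z_u * z_v).sum(-1))                       # (B,)
    neg = F.logsigmoid(-(z_neg @ z_u.unsqueeze(-1)).squeeze(-1))  # (B, Q)
    return -(pos + neg.sum(-1)).mean()

def sample_negatives(degrees, batch_size, Q):
    # degrees: (num_nodes,) node degrees; P_n(v) ~ deg(v)**0.75 (word2vec heuristic)
    probs = degrees.float() ** 0.75
    idx = torch.multinomial(probs, batch_size * Q, replacement=True)
    return idx.view(batch_size, Q)  # node ids of the negatives
```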

Regarding the implementation of lines 1-7 (Algorithm 2): after k=2, B^1 has the nodes from B^2 plus those nodes' random neighbors, right? Now when k=1, we have to find random neighbors of the nodes in B^1 and take the union. So for this step, do we sample neighbors of the B^2 nodes again as well, or only of the nodes that were newly added to B^1? Please help me understand.

utkarshkathuria
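
My reading of Algorithm 2, lines 1-7, expressed as a sketch (the adjacency structure `adj` and per-depth sample sizes `S` are assumed names, not the paper's notation): at every depth k, neighbors are sampled for EVERY node currently in B^k, so at k=1 the original B^2 nodes get a fresh neighbor sample too, not only the newly added ones.

```python
import random

def sample_neighborhoods(adj, batch_nodes, S):
    # adj: dict node -> list of neighbors; S: [S_1, ..., S_K] sample sizes
    K = len(S)
    B = {K: set(batch_nodes)}          # line 1: B^K <- batch
    for k in range(K, 0, -1):          # lines 2-7: k = K, ..., 1
        B[k - 1] = set(B[k])           # line 3: B^{k-1} <- B^k
        for u in B[k]:                 # line 4: every node in B^k, including
            neigh = adj[u]             #         those carried over from B^K
            B[k - 1] |= set(random.sample(neigh, min(S[k - 1], len(neigh))))
    return B                           # B[0] is the largest set: B[0] ⊇ ... ⊇ B[K]
```

On this reading, the B^2 nodes simply get a new (possibly different) random sample at depth 1.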

Hey, great explanation! I had a question. The way GraphSAGE trains, there is no way for the embeddings to learn from (or even be exposed to) graph structure/nodes beyond K hops, right? Isn't this a shortcoming? You might have a huge graph with important structural information to learn, but the formulation only lets a node see up to K hops.

rutvikreddy

Great explanation, thank you so much! Could you please clear something up, I'm really confused: in GraphSAGE, do we run Algorithm 1 on a sample of nodes from the entire graph, or on all nodes of the graph? And what is the difference between GraphSAGE and GCN? As you know, in GCN the embedding of a node is an aggregate of the embeddings of its neighborhood, and I think in the GraphSAGE algorithm the embedding of a node is likewise an aggregate of its neighborhood. Could you tell me what the main difference is?

mohamadabdulkarem
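
On the GraphSAGE-vs-GCN part of the question, a compact (purely illustrative) way to see the difference is one propagation step of each, sketched below: GCN mixes a node together with its full, fixed neighborhood through a single normalized adjacency and one weight matrix, while GraphSAGE samples a fixed-size neighborhood and keeps the node's own state separate via concatenation.

```python
import torch

def gcn_step(A_hat, H, W):
    # GCN: whole graph at once; A_hat = D^-1/2 (A + I) D^-1/2 mixes each node
    # with ALL of its neighbors through one shared weight matrix.
    return torch.relu(A_hat @ H @ W)

def sage_mean_step(h_self, h_neigh, W):
    # GraphSAGE-mean: h_neigh holds a SAMPLED fixed-size neighborhood (N, S, D);
    # the node's own state is concatenated, not averaged in.
    agg = h_neigh.mean(dim=1)
    return torch.relu(torch.cat([h_self, agg], dim=-1) @ W)
```

Because each node only ever needs S sampled neighbors per layer, GraphSAGE can be trained on minibatches and can embed nodes unseen during training, whereas vanilla GCN, as originally formulated, trains transductively on the full normalized adjacency.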

Can anyone help me by explaining how I can use this for finding shortest paths, after doing node or graph classification?

GauravSingh-yxmw

Hey Aleksa,
The math is a bit hairy, so going over it would be great! They say their model generalizes CNN architectures to graphs and manifolds, which is really important for feature learning in these domains.

essy