Graphs, Vectors and Machine Learning - Computerphile

There's a lot of talk of image and text AI with large language models and image generators generating media (in both senses of the word) - but what about graphs? Dr David Kohan Marzagao specialises in Machine Learning for Graph-Structured Data and takes us through some simple examples.


This video was filmed and edited by Sean Riley.


Comments

If all computer science lectures were delivered the way this gentleman delivers this one, all students would excel without a doubt. Great job on this video.

architech

In case anyone is a bit lost as to what's going on with the dot product: basically, it's a way of comparing two vectors for similarity. So if you've got two identical vectors of length 1 - both pointing the same way - the dot product is 1, meaning identical.

Turn one round 180 degrees and the dot product gives -1. If they're perpendicular you get zero. It's a lot like the cosine, if you remember your trig.

There's a bit more to it if they're not unit vectors with a length of one. It basically gives you the ratio if one were projected onto the other. Imagine a clock face with the long hand at 12. The dot product gives you the amount the short hand goes up the long hand, as a ratio. So if you shine a light from the side and the shadow of one hand goes halfway up the other, then the dot product is 0.5.

Hope that helps.
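To make that intuition concrete, here is a small NumPy sketch (my own toy numbers, not from the video) checking the values described above:

```python
import numpy as np

a = np.array([1.0, 0.0])  # unit vector pointing "right"
b = np.array([0.0, 1.0])  # unit vector, perpendicular to a

print(np.dot(a, a))   # identical unit vectors -> 1.0
print(np.dot(a, -a))  # rotated 180 degrees   -> -1.0
print(np.dot(a, b))   # perpendicular         -> 0.0

# For non-unit vectors, dividing by the lengths gives the cosine
# of the angle between them (the "shadow" ratio in the clock analogy):
c = np.array([3.0, 4.0])
cos = np.dot(a, c) / (np.linalg.norm(a) * np.linalg.norm(c))
print(cos)  # 3 / 5 = 0.6
```

The last value is exactly the projection ratio: `c` has length 5, and its shadow along `a` has length 3.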

bottomhat

I like this. I'd watch more of this guy explaining graph kernels and beyond!

viktortodosijevic

Finally something I can understand!!
I hate other explanations that either ask you to ignore the math or dive straight into the code. As a beginner in computer science, I have no idea how different concepts connect. This guy helps bridge that missing link in my knowledge!🙏🙏😆
Thank you so much!! Could he also explain the kernel trick?

j

This guy just oozes passion for computer science!

adityavardhanjain

Really interesting. For anyone interested in these directions, I recommend looking into graph neural networks and, specifically for molecules, topological neural networks!

danielwtf

All this leads into the SVD, where the U matrix captures the eigenvectors of the AAᵀ matrix (observations) and the V matrix captures the eigenvectors of the AᵀA matrix (features).
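As an illustrative sketch of that relationship (toy matrix of my own choosing, not from the video), NumPy's SVD makes it easy to verify numerically:

```python
import numpy as np

# Hypothetical data matrix: rows = observations, columns = features.
A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Columns of U are eigenvectors of A @ A.T (observation space);
# rows of Vt (i.e. columns of V) are eigenvectors of A.T @ A (feature space),
# and the squared singular values are the corresponding eigenvalues.
eigvals_feat, eigvecs_feat = np.linalg.eigh(A.T @ A)

print(np.allclose(np.sort(s**2), np.sort(eigvals_feat)))  # True
```

The same check with `A @ A.T` recovers the eigenvalues paired with the columns of `U`.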

novelspace

Nice presentation. More content on graph optimizations!

kabuda

You can use graph convolutional networks to classify certain kinds of abuse by the users of your product. It's really flexible when you have sparse bits of information about the relationships between people and products. Graphs are absolutely everywhere.

wouldntyaliktono

I'd think the similarity between two objects is the percentage of overlapping arrangements. Identical objects would have similarity 1, while objects that share a large pattern of elements would have about 0.5 similarity. The problem is that the number of arrangements to check blows up astronomically right away.

Amonimus

So, it seems Dr David is my favorite now :)

odorlessflavorless

I think we need more videos on Graph theory.

YashGoswami-yi

So text language models map words into a coordinate system to find which words are most likely to appear near each other. That would work with the graph triangle demos easily.

AuditorsUnited

A general computerphile comment: work is slowly being done to build Babbage's analytical engine. Maybe you can feature the project and give it some momentum!

mellertid

The "plots", distinguished from graphs, which Sean raises at 2:30: are those plots as in a Cartesian coordinate system, like a grid with x and y axes? I assume these algorithms don't apply to plots because coordinates are positional, i.e., the distance, angle, orientation or other scalar properties of any given coordinate are fixed to the plane itself, and not just relative to other nodes in a graph. Is that more or less why?

JamesGaehring

13:52 It's Leman, not Lehman. Andrei Leman was Russian, not German. But probably half the papers in the literature make this mistake! His co-author, Boris Weisfeiler, at some point emigrated to the US and disappeared in Chile in mysterious circumstances during the Pinochet dictatorship.

beeble

@08:35, “We’re not going to go into the details of the kernel method…”.
Do you have a recommendation for someone who wants to go into the details? A reference maybe?

writerightmathnation

What happens when you use this technique with a distance of 3 and you start having to consider loops?

owenhoffend

Instead of three red and three blue nodes, what if we had three blue and one brown? 😁

bmitch

Is it not feasible to use an encoder-decoder model to build a low-dimensional embedding of an adjacency matrix, and then compare the embeddings with cosine similarity?
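A minimal sketch of the comparison step in that proposal (toy graphs of my own choosing; in the actual proposal the vectors would be learned embeddings, not raw adjacency matrices):

```python
import numpy as np

def cosine_similarity(u, v):
    # Dot product normalized by the vectors' lengths.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 3-node graphs, flattened adjacency matrices.
# Note: raw adjacency matrices are not permutation-invariant,
# which is exactly what a learned embedding would need to handle.
triangle = np.array([[0, 1, 1],
                     [1, 0, 1],
                     [1, 1, 0]]).flatten()
path = np.array([[0, 1, 0],
                 [1, 0, 1],
                 [0, 1, 0]]).flatten()

print(cosine_similarity(triangle, triangle))  # 1.0
print(cosine_similarity(triangle, path))      # 4 / (sqrt(6) * 2) ≈ 0.816
```

The open question is whether the encoder can be trained so that isomorphic graphs (same structure, relabeled nodes) land on the same embedding.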

kabrol