PyTorch Geometric tutorial: Graph Autoencoders & Variational Graph Autoencoders

In this tutorial, we present Graph Autoencoders and Variational Graph Autoencoders from the paper:

Later, we show an example taken from the official PyTorch Geometric repository:

Download the slides and the Jupyter notebook from the official website:
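
For orientation, here is a condensed sketch in the spirit of that repository example (the non-variational GAE variant on Cora). It is reconstructed from memory rather than copied from the notebook, and train_test_split_edges is deprecated in recent PyG releases in favor of the RandomLinkSplit transform, so double-check against the actual example.

import torch
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv, GAE
from torch_geometric.utils import train_test_split_edges

dataset = Planetoid('data/Planetoid', name='Cora')
data = train_test_split_edges(dataset[0])              # adds train/val/test edge splits
x, train_pos_edge_index = data.x, data.train_pos_edge_index

class GCNEncoder(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, 2 * out_channels)
        self.conv2 = GCNConv(2 * out_channels, out_channels)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

model = GAE(GCNEncoder(dataset.num_features, out_channels=16))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

def train():
    model.train()
    optimizer.zero_grad()
    z = model.encode(x, train_pos_edge_index)
    loss = model.recon_loss(z, train_pos_edge_index)   # for VGAE, add (1 / data.num_nodes) * model.kl_loss()
    loss.backward()
    optimizer.step()
    return float(loss)

def test(pos_edge_index, neg_edge_index):
    model.eval()
    with torch.no_grad():
        z = model.encode(x, train_pos_edge_index)
    return model.test(z, pos_edge_index, neg_edge_index)   # returns (AUC, AP)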
Comments

This tutorial doesn't do latent space visualization, which is a very important aspect of gauging whether VAEs are being trained correctly. Could you share a video that covers this too?

vlogsofanundergrad
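
As a rough illustration only (not something the tutorial covers), one common way to inspect the latent space of the Cora example is to project the node embeddings with t-SNE and color them by the class labels in data.y. This assumes the model, x, train_pos_edge_index, and data variables from the sketch near the top.

import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

model.eval()
with torch.no_grad():
    z = model.encode(x, train_pos_edge_index)           # latent node embeddings

z2d = TSNE(n_components=2).fit_transform(z.cpu().numpy())
plt.scatter(z2d[:, 0], z2d[:, 1], c=data.y.cpu().numpy(), s=10, cmap='tab10')
plt.title('t-SNE of latent node embeddings')
plt.show()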

Hello everybody. First of all, thanks for this particular lecture and the whole course.
I have one inquiry: is it possible to produce an embedding related not only to the node features but also to the graph structure? I mean, if we move to slide 33, we have a graph with 3 nodes and several node features per node, and it is reduced to 3 nodes with 2 features per node. So, for example, would it be possible to condense the graph to 2 nodes with 2 node features per node and then decode it back to the original graph?
Thanks in advance.

JJabn

Thanks for the tutorial. One thing I don't get about the test of the VGAE: why do we encode on the training set and not on the test set?

Ripper.

Hi, I think there is a mistake in the formulation of the variational autoencoder's reconstruction loss.

sarash
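
For reference, the reconstruction term in the original VGAE paper (Kipf & Welling, 2016) is the expected log-likelihood of the adjacency matrix under an inner-product decoder, i.e. a binary cross-entropy over node pairs; it may be worth comparing this against the slide in question:

\mathbb{E}_{q(\mathbf{Z}\mid\mathbf{X},\mathbf{A})}\!\left[\log p(\mathbf{A}\mid\mathbf{Z})\right],
\qquad
p(\mathbf{A}\mid\mathbf{Z}) = \prod_{i,j} \sigma(\mathbf{z}_i^{\top}\mathbf{z}_j)^{A_{ij}} \bigl(1 - \sigma(\mathbf{z}_i^{\top}\mathbf{z}_j)\bigr)^{1 - A_{ij}}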

I suppose this is a very simple question, I'm sorry about that. The GCN receives a graph as input and produces a vector as output; usually, the length of this output vector is the number of nodes in the graph. Then, how can you concatenate two GCNs? The input of the second one is going to be a vector instead of a graph. Thank you for your video.

francescserratosa
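
A quick way to see why chaining layers works: a GCNConv layer maps an [N, F_in] node-feature matrix to an [N, F_out] node-feature matrix over the same graph (one vector per node, not one vector per graph), so the second layer again receives node features plus edge_index. A minimal shape check with made-up numbers:

import torch
from torch_geometric.nn import GCNConv

x = torch.randn(5, 16)                       # 5 nodes, 16 features each
edge_index = torch.tensor([[0, 1, 2, 3],     # 4 directed edges on the same graph
                           [1, 2, 3, 4]])

conv1 = GCNConv(16, 32)
conv2 = GCNConv(32, 8)

h = conv1(x, edge_index).relu()              # shape [5, 32]: still one vector per node
z = conv2(h, edge_index)                     # shape [5, 8]: the graph structure is reused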

Hello, thank you for the video, it is very interesting. I want to ask a question, because I found another example that seems more correct to me. Why, in the test function, do you compute the encoding of the train data and then compute the loss, AUC, and AP on the test edges?

This is what I mean. In your code:

def test(pos_edge_index, neg_edge_index):
    model.eval()
    with torch.no_grad():
        z = model.encode(x, train_pos_edge_index)
    return model.test(z, pos_edge_index, neg_edge_index)

### in the other example:
def test(pos_edge_index, neg_edge_index):
    model.eval()
    with torch.no_grad():
        z = model.encode(x, test_pos_edge_index)
    return model.test(z, pos_edge_index, neg_edge_index)

nicolacalabrese

Beautiful lecture and tutorial.

In slide 59, is it q(z|x) that we do not know, or p(z|x)? Can you confirm?

EverythingDatawithHafeezJimoh
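
For comparison, the objective in the VGAE paper is the ELBO below: the true posterior p(Z | X, A) is intractable (that is the distribution we do not know), and q(Z | X, A) is the learned Gaussian approximation produced by the encoder. How this maps onto the slide's notation is for the author to confirm.

\mathcal{L} = \mathbb{E}_{q(\mathbf{Z}\mid\mathbf{X},\mathbf{A})}\!\left[\log p(\mathbf{A}\mid\mathbf{Z})\right]
            - \mathrm{KL}\!\left[\, q(\mathbf{Z}\mid\mathbf{X},\mathbf{A}) \,\|\, p(\mathbf{Z}) \,\right],
\qquad
p(\mathbf{Z}) = \prod_i \mathcal{N}(\mathbf{z}_i \mid \mathbf{0}, \mathbf{I})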

Can it be applied to multiple graphs instead of a single graph? What would pos_edge_index and neg_edge_index be? It would help a lot if you could provide some pointers.

yumlembamrahul
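
One possible approach, shown only as a sketch: PyG's DataLoader merges a batch of graphs into one large disconnected graph, so batch.edge_index can serve as the positive edges and negatives can be drawn per graph with batched_negative_sampling. This assumes dataset is a collection of Data objects (one per graph) and that model and optimizer are set up as in the sketch near the top; in older PyG versions DataLoader lives in torch_geometric.data instead of torch_geometric.loader.

import torch
from torch_geometric.loader import DataLoader
from torch_geometric.utils import batched_negative_sampling

loader = DataLoader(dataset, batch_size=32, shuffle=True)

for batch in loader:                                    # batch = many graphs merged into one
    optimizer.zero_grad()
    z = model.encode(batch.x, batch.edge_index)
    # negatives are sampled within each graph of the batch, not across graphs
    neg_edge_index = batched_negative_sampling(batch.edge_index, batch.batch)
    loss = model.recon_loss(z, batch.edge_index, neg_edge_index)
    loss.backward()
    optimizer.step()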

How do you create a graph from a custom dataset that will work with the autoencoders?

padisarala
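
A minimal sketch of the ingredients: a custom graph is just a torch_geometric.data.Data object with a node-feature matrix x and an edge_index tensor, after which the same link split used in the tutorial can be applied. The numbers below are made up for illustration.

import torch
from torch_geometric.data import Data
from torch_geometric.utils import train_test_split_edges

x = torch.randn(4, 3)                                  # 4 nodes, 3 features each (toy values)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],         # undirected edges stored in both directions
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)

data = Data(x=x, edge_index=edge_index)
data = train_test_split_edges(data)                    # or the RandomLinkSplit transform in newer PyG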

Thanks for the lecture. If we have text data without labels, can we convert it into a graph for this?

swatityagi
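
Purely as an illustration of one option (a word co-occurrence graph, which is not something the video covers): treat each unique word as a node, connect words that appear in the same sentence, and use placeholder one-hot features. All data here is a toy example.

import torch
from torch_geometric.data import Data

sentences = [["graph", "autoencoders", "learn", "node", "embeddings"],
             ["variational", "autoencoders", "learn", "distributions"]]

vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}

src, dst = [], []
for s in sentences:                                    # edge between words co-occurring in a sentence
    for i in range(len(s)):
        for j in range(len(s)):
            if i != j:
                src.append(idx[s[i]])
                dst.append(idx[s[j]])

edge_index = torch.unique(torch.tensor([src, dst], dtype=torch.long), dim=1)  # drop duplicate edges
x = torch.eye(len(vocab))                              # one-hot features as a placeholder
data = Data(x=x, edge_index=edge_index)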

Thank you very much for this fantastic series on ML applied to graphs. I have a couple of questions:
- The W0 and W1 terms in the GCNs: are they weights that are initialized randomly and then optimized during the learning process?

- Regarding the code you implemented, the structure of the dataset you used is not clear to me: data.x is the node feature matrix, but what exactly is data.edge_index? Does it have to do with the adjacency matrix?

- Why do we split the dataset into a train and a test set if we are doing unsupervised learning?

- Suppose I have a whole dataset of sparse matrices (only 1s and 0s) and I want to treat each of them as the adjacency matrix of my graph: how can I create the data type needed as input for the variational autoencoder described (and implemented) in this video?

carloamodeo
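
On the last point, a minimal sketch with a toy 3x3 adjacency matrix: torch_geometric.utils.dense_to_sparse converts a dense 0/1 adjacency matrix into the edge_index format, and data.edge_index is exactly that sparse [2, num_edges] COO representation of the adjacency matrix. The identity matrix is used as placeholder node features when none exist.

import torch
from torch_geometric.data import Data
from torch_geometric.utils import dense_to_sparse, train_test_split_edges

adj = torch.tensor([[0., 1., 0.],                      # toy adjacency matrix (0/1 entries)
                    [1., 0., 1.],
                    [0., 1., 0.]])

edge_index, _ = dense_to_sparse(adj)                   # [2, num_edges] COO form, i.e. data.edge_index
x = torch.eye(adj.size(0))                             # identity features if no node attributes exist

data = Data(x=x, edge_index=edge_index)
data = train_test_split_edges(data)                    # same link split as in the video's example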

Thanks a lot for a very nice tutorial. I have a quick question. At 33:52, for learning mu and sigma, you said they use a shared learning parameter W_1; however, in VariationalGCNEncoder, self.conv_mu and self.conv_logstd are constructed as separate GCNConv layers, so they do not share the same parameters.

anowarulkabir
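
For reference, the encoder in the official example looks roughly like the sketch below (reproduced from memory, so double-check against the repository): the first layer, W_0 in the paper's notation, is shared, while conv_mu and conv_logstd are two separate GCNConv layers with their own weights. So strictly speaking only the first-layer parameters are shared between mu and sigma, which matches the paper.

import torch
from torch_geometric.nn import GCNConv, VGAE

class VariationalGCNEncoder(torch.nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv1 = GCNConv(in_channels, 2 * out_channels)        # shared first layer
        self.conv_mu = GCNConv(2 * out_channels, out_channels)      # separate weights for mu
        self.conv_logstd = GCNConv(2 * out_channels, out_channels)  # separate weights for log(sigma)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        return self.conv_mu(x, edge_index), self.conv_logstd(x, edge_index)

model = VGAE(VariationalGCNEncoder(in_channels=1433, out_channels=16))  # 1433 = Cora feature size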

Thanks for the lecture! I have one question:

Let's say you have a dataframe with columns that can be seen as nodes, and there is a link (edge) between the columns based on domain knowledge. How do you convert this into an input that can be used for a VGAE?

nateriver
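
A possible sketch, with made-up column names and domain links: one node per column, the column's values as its feature vector, and edge_index built from the known links (stored in both directions for an undirected graph).

import torch
import pandas as pd
from torch_geometric.data import Data

df = pd.DataFrame({'a': [1.0, 2.0], 'b': [3.0, 4.0], 'c': [5.0, 6.0]})   # toy dataframe
col_to_idx = {c: i for i, c in enumerate(df.columns)}

x = torch.tensor(df.values.T, dtype=torch.float)        # one node per column, its values as features

known_links = [('a', 'b'), ('b', 'c')]                   # edges from domain knowledge
src = [col_to_idx[u] for u, v in known_links] + [col_to_idx[v] for u, v in known_links]
dst = [col_to_idx[v] for u, v in known_links] + [col_to_idx[u] for u, v in known_links]
edge_index = torch.tensor([src, dst], dtype=torch.long)

data = Data(x=x, edge_index=edge_index)                  # ready for the same VGAE pipeline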

Thanks for the clear explanation!

Just one question: normally we use a VAE (or here a VGAE) as a generative model, but it seems that in your example we cannot use the decoder individually, right? How can we call the InnerProductDecoder on its own? Something like VGAE.decoder?

chongtang
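
On the decoder question, a small sketch assuming the model and data from the training sketch near the top: GAE/VGAE expose the decoder both as model.decoder (an InnerProductDecoder by default) and through the model.decode shortcut, and forward_all returns the full dense matrix of edge probabilities. The candidate edges below are hypothetical.

import torch

model.eval()
with torch.no_grad():
    z = model.encode(x, train_pos_edge_index)

    # probability of specific candidate edges, given as a [2, num_candidates] tensor
    candidate_edge_index = torch.tensor([[0, 1],
                                         [2, 3]])
    probs = model.decoder(z, candidate_edge_index, sigmoid=True)
    # equivalent shortcut defined on GAE/VGAE:
    probs = model.decode(z, candidate_edge_index, sigmoid=True)

    # full N x N matrix of reconstructed edge probabilities
    adj_prob = model.decoder.forward_all(z, sigmoid=True)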