Understanding Variational Autoencoders (VAEs) | Deep Learning

Here we delve into the core concepts behind the Variational Autoencoder (VAE), a widely used representation learning technique that uncovers the hidden factors of variation throughout a dataset.

Timestamps
--------------------
Introduction 00:00
Latent variables 01:53
Intractability of the marginal likelihood 05:08
Bayes' rule 06:35
Variational inference 09:01
KL divergence and ELBO 10:14
ELBO via Jensen's inequality 12:06
Maximizing the ELBO 12:57
Analyzing the ELBO gradient 14:34
Reparameterization trick 15:55
KL divergence of Gaussians 17:40
Estimating the log-likelihood 19:04
Computing the log-likelihood 19:58
The Gaussian case 20:17
The Bernoulli case 21:56
VAE architecture 23:33
Regularizing the latent space 25:37
Balance of losses 28:00
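
For readers who want to see how the pieces in the timestamps fit together, here is a minimal code sketch of a VAE training loss in PyTorch: a Gaussian encoder q(z|x), the reparameterization trick, the closed-form KL divergence against a standard-normal prior, and a Bernoulli reconstruction term. The module and function names are my own illustration, not code from the video.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        # Encoder: x -> (mu, log sigma^2) of q(z|x)
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.fc_mu = nn.Linear(h_dim, z_dim)
        self.fc_logvar = nn.Linear(h_dim, z_dim)
        # Decoder: z -> Bernoulli logits for p(x|z)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # so gradients flow through mu and logvar rather than through sampling.
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * logvar) * eps
        return self.dec(z), mu, logvar

def negative_elbo(x, logits, mu, logvar):
    # Reconstruction term: Bernoulli negative log-likelihood, summed over pixels.
    recon = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
    # KL(q(z|x) || N(0, I)) in closed form for a diagonal Gaussian posterior.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Minimizing this average is equivalent to maximizing the ELBO.
    return (recon + kl) / x.shape[0]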

Comments
------------------------

My favorite video on VAEs; the derivation of the ELBO is much clearer than in other resources I've found online. Awesome resource.

BigBeniir

One of the best explanations of VAEs. 👌

sathyanarayanan

Best video on VAEs I've found on YouTube :D Thanks a lot!

emmyzhou

Thanks for the clear step-by-step explanation!

KrizTahimic

At 4:42 (also at 18:10) - Small correction: not N(0, I), but N(0, 1). This is a multivariate unit Gaussian: centered at the 0-vector with a unit standard deviation of 1, not capital I.

MooseOnEarth

By far the best resource I've found on VAEs, after _lots_ of reading and video watching. This puts it all together intelligently and clearly. Thank you!!

Chachaboyz

I absolutely love the way you presented a visual illustration of how the reconstruction and KL-divergence loss terms affect the latent space.

KianSartipzadeh

This video is amazing! I like how you bring the reparameterization trick into the picture by first calculating the gradient separately to show the potential issue. Super clear!

ligezhang

Very neat, I look forward to more of your content.

rafa_br

NOTES:

- Slight typo at 15:20: where it says "ELBO" within the integral, it's just the difference of logs, whereas the ELBO is actually the expectation of that difference of logs over q(z|x).
- At 20:36 (the Gaussian case), I've phrased it in univariate terms (even though the Gaussian is generally multivariate); however, we would still recover the result that what we're calculating is a Euclidean distance between x and mu, given that our covariance matrix is the identity. (Both points are written out below.)
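
For reference, both points written out in LaTeX (standard definitions; notation may differ slightly from the slides):

\mathrm{ELBO}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x, z) - \log q_\phi(z \mid x)\big],
\qquad
\log p_\theta(x) = \mathrm{ELBO}(\theta, \phi; x) + D_{\mathrm{KL}}\!\big(q_\phi(z \mid x) \,\|\, p_\theta(z \mid x)\big)

\log \mathcal{N}(x; \mu, I) = -\tfrac{1}{2} \lVert x - \mu \rVert^2 - \tfrac{d}{2} \log(2\pi)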

deepbean

Thank you! You did such a great job explaining this! I've finally understood how all these terms and concepts come together. <3

elenamacedo

You show everything working together in the video; that's really helpful. Thank you!

nikiiliev

Oh I'm so glad I found this video

radifire

Nice, will revisit this because the maths is overwhelming right now

rishidixit

Thank you so much for this, it really cleared up how VAEs work.

bradleymorris

Thank you very much for your work; it has been decisive in helping me understand the basis of this type of model, and it will surely be of great help in understanding how others work.

carlosmontalban

At 8:46, why is the joint probability tractable? Why are the others not tractable?

AbhayShuklaSilpara

Hello, I can't understand the step at 15:20. You expand the expectation and bring the gradient into the integral, but why do you replace the difference of logs with the ELBO? The ELBO is the expectation of the difference of logs over q, isn't it?

carlosmontalban

At 13:25 it was stated that maximizing the ELBO also maximizes the evidence and minimizes the KL divergence. However, you did not prove/show how this can happen. Actually, I thought the evidence was a constant, because once we gather a dataset {x1, x2, ..., xn}, p_theta(xi) is constant (when theta is constant).

eneserdogan

Where can I study probability for deep learning, specifically the kind used here? I have studied probability, but not to this depth. If anyone can recommend resources, it would be very helpful.

rishidixit