178 - An introduction to variational autoencoders (VAE)

Code generated in the video can be downloaded from here:
Comments

As Einstein said, "If you can't explain it to a six-year-old, you don't understand it yourself."
Very easy to understand, thank you.

cankoban

Thank you for such a math-free video. It's very rare to find videos with a clear explanation of the intuition behind the problem. Once we grasp the idea, the math seems more manageable. Thank you so much!

musicbytea

Dude, your channel is a gold mine. Keep up the great work!

roshanpatel

Incredible. Can't wait for future videos. Big fan as always.

matthewmiller

I have never heard a better explanation than this.

mdyounusahamed

How can you explain this topic so elegantly and clearly? Thank you.

HandokoSupeno

Oh, finally! I'm so glad you made a tutorial on AAE. Please cover all aspects of AAE in image processing. Thanks so much, you're the best YouTuber for image processing and deep learning. I'm your biggest fan. Just a small request: please take a bit more time explaining the code, as I'm a biologist interested in deep learning and image analysis. Thanks once again.

samarafroz

Such a great explanation! Thank you so much :)

devanshsharma

Thank you for the great intuitive explanation!

linhbui-vtyz

I was working my way through the MIT Deep Learning generative models lectures (2024) and was stuck on the introduction of epsilon in the loss calculation. Your instruction helped clarify many things; however, I'm still trying to get my head around all of this.

veganath
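The epsilon mentioned above is the reparameterization trick: instead of sampling z directly from N(mu, sigma^2), which has no gradient, the network draws epsilon from N(0, I) outside the computation graph and computes z = mu + sigma * epsilon. A minimal NumPy sketch (function and variable names are illustrative, not from the video's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * epsilon, with epsilon ~ N(0, I).
    # epsilon carries all the randomness, so the path from the
    # encoder outputs (mu, log_var) to z stays differentiable.
    eps = rng.standard_normal(np.shape(mu))
    sigma = np.exp(0.5 * np.asarray(log_var))
    return mu + sigma * eps

mu = np.array([0.0, 1.0, -1.0])
log_var = np.zeros(3)            # sigma = 1 for every dimension
z = reparameterize(mu, log_var)  # a random sample centered on mu
```

As log_var shrinks, sigma goes to zero and the sample collapses onto mu, which is why the KL term in the loss has to keep the encoder from doing exactly that.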

Awesome stuff! I like the hesitant pause at "backpropagation" - oh, I guess you know it if you're watching this, hahaha.

friendlydroid

Awesome! Really very useful explanation!

zilaleizaldin

Thank you for your great video! I've seen a lot of notes that go deep into the mathematical parts but neglect to explain why and how we need to use a VAE. Your video helped me understand why we need to learn a desirable distribution for the latent vector.

chenmingyang

Thank you! Looking forward to the application!

onurserce

Thank you for the great intuitive explanation! I was looking for a video of this kind!

sivanschwartz

How do I decide how many artificial neurons to use in each layer? I'm super confused.

michaelmoore

Very informative video Sir. Thank you very much.

RENJIS-oncp

Really great video!
Is there code for other datasets, like MRI images?

momfadda

This is all great, I think my one quibble is that you are perhaps using a slightly nonstandard definition of "generative". Usually it means that we are modelling the distribution of the input space, and can therefore sample ("generate") new realistic inputs. For exactly the reasons you state, standard autoencoders don't do this, and therefore by definition are not generative models. Yes they can "generate" things but those things don't represent the input space and will probably be a "meaningless" mess. Whereas with variational autoencoders, they do model the input space and can therefore generate "realistic" inputs, so they are generative models.

gorgolyt
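The distinction drawn above can be seen in code: because VAE training pushes the encoder's posterior toward the N(0, I) prior, sampling latent vectors from that prior and decoding them yields outputs that resemble the training data. A toy sketch with a stand-in linear decoder (the weights here are random placeholders; a real VAE learns them during training):

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim, image_dim = 2, 784  # toy sizes, e.g. a flattened 28x28 image

# Stand-in "trained" decoder parameters (hypothetical; a real VAE learns these).
W = rng.standard_normal((latent_dim, image_dim)) * 0.01
b = np.zeros(image_dim)

def decode(z):
    # Map latent vectors to pixel intensities in (0, 1) via a sigmoid.
    return 1.0 / (1.0 + np.exp(-(z @ W + b)))

# Because training matched q(z|x) to N(0, I), samples from the prior
# land in regions of latent space the decoder knows how to render.
z = rng.standard_normal((5, latent_dim))  # 5 samples from the prior
images = decode(z)                        # a (5, 784) batch of "generated" images
```

With a plain autoencoder there is no such prior, so a random z usually falls in a region the decoder never saw, which is exactly why its output is a "meaningless" mess.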

A question concerning VAEs: using a VAE with images, we currently start by compressing an image into the latent space and reconstructing it from the latent space.

What if we start with a photo of an adult, say a man or woman 25 years old (young adult), and rebuild an image of the same person at a younger age, say 14 years old (mid-teen)? Do you see where I'm going with this? Can we build a VAE that makes a face younger, from 25 (young adult) to 14 (mid-teen)?

In more general terms, can a VAE be used to learn a non-identity function?

cptechno
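One common answer to this question (not covered in the video, so treat this as a hedged sketch) is latent-space arithmetic: encode groups of older and younger faces, take the difference of their mean latent codes as an "age direction", and shift a code along that direction before decoding. The codes below are random stand-ins; in practice they come from running real photos through a trained encoder:

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim = 8

# Stand-in latent codes for two groups of faces (hypothetical; a real
# workflow would obtain these by encoding labeled adult/teen photos).
z_adults = rng.standard_normal((200, latent_dim)) + 1.0
z_teens = rng.standard_normal((200, latent_dim)) - 1.0

# Direction in latent space that (roughly) encodes "getting older".
age_direction = z_adults.mean(axis=0) - z_teens.mean(axis=0)

def make_younger(z, strength=1.0):
    # Shift a latent code against the age direction; decoding the
    # result (decoder omitted here) would give a younger-looking face.
    return z - strength * age_direction
```

This only works because the VAE's latent space is smooth and organized, which is the property the video argues plain autoencoders lack.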