Variational Autoencoder from scratch || VAE tutorial || Developers Hutt

Don't learn directly from the inputs; learn from their distribution, so you can keep track of what you're actually learning.
That is the motivation behind Variational Autoencoders.

In this video, you'll learn how a Variational Autoencoder works and how to build one from scratch on a dataset of your choice using TensorFlow and Keras.
I hope you like it.
If you have any query about this, please comment down below; and if you don't, please leave your feedback instead, it means a lot to me.

And as always, thanks for watching.
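The video builds the model in TensorFlow/Keras; as a rough, framework-free sketch of the forward pass it teaches (the 64*64 image size, the 2-D latent space, and the random stand-in weights are assumptions, not the video's exact architecture), the core of a VAE looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 64*64 grayscale image and a 2-D latent
# space (both assumed; the tutorial's exact architecture may differ).
d, latent = 64 * 64, 2

# Small random "weights" standing in for trained encoder/decoder layers.
w_mu = rng.normal(size=(d, latent)) * 0.01
w_logvar = rng.normal(size=(d, latent)) * 0.01
w_dec = rng.normal(size=(latent, d)) * 0.01

def encode(x):
    """Map an image to the mean and log-variance of q(z|x)."""
    return x @ w_mu, x @ w_logvar

def reparameterize(mu, log_var, eps):
    """Sample z = mu + sigma * eps; gradients flow through mu and log_var."""
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Map a latent code back to pixel space (sigmoid keeps pixels in (0, 1))."""
    return 1.0 / (1.0 + np.exp(-(z @ w_dec)))

x = rng.random((1, d))                          # one fake input image
mu, log_var = encode(x)
z = reparameterize(mu, log_var, rng.normal(size=mu.shape))
x_hat = decode(z)                               # reconstruction, shape (1, 64*64)
```

In a real Keras model the three functions above would be trainable layers and the whole thing would be trained end to end on the reconstruction and KL losses.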

Download dataset from here:

I'm available for your queries, ask me at:
Comments

How can you backpropagate through the mean or std? Those aren't parameters; they're just properties of the input.
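For what it's worth, the usual answer is the reparameterization trick: mu and log_var are not free parameters but outputs of the encoder network, and sampling is rewritten as a deterministic function of them plus external noise, so gradients flow through them back into the encoder's weights. A minimal NumPy illustration (values are made up):

```python
import numpy as np

def reparameterize(mu, log_var, eps):
    # z is a differentiable function of mu and log_var; the randomness
    # lives entirely in eps ~ N(0, 1), which needs no gradient.
    return mu + np.exp(0.5 * log_var) * eps

mu = np.array([0.5, -1.0])
log_var = np.array([0.0, 0.0])     # log_var = 0 means sigma = 1
eps = np.array([1.0, 2.0])
z = reparameterize(mu, log_var, eps)
# dz/dmu = 1 and dz/dlog_var = 0.5 * sigma * eps, so backprop reaches
# whatever encoder weights produced mu and log_var.
```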

justinlloyd

Amazing! I have one doubt regarding my use case: I want to augment data from an accelerometer sensor and then feed it to a CNN. Can we discuss it?

yashpisat

The link to the dataset is not working!

amangarg

This is a really good explanation with simple code. Thank you! One thing I didn't understand: why divide by 64*64 when adding the KL loss to the total loss?

saikeerthi

I watched many videos trying to understand the loss for VAEs; so far this is the best explanation I've found. Thank you!

BarryAllen-lhjg

Hey, can I use a variational autoencoder to mutate a sentence, i.e., give it a sentence and have it produce a different sentence with the same meaning?

sushilkrsoni

Can you show, with code, how to load a saved pre-trained model so I can use it to generate some images and train it further?

MTalha

Can you please explain in a few words why you divided the KL loss by 64*64 in "return reconstruction_loss(y_true, y_pred) + (1 / (64*64)) * kl_loss(mu, log_var)" when summing up the total loss? Was it to scale the KL loss to the image size, since the dataset you used contains images of shape 64*64? Waiting for your answer. Thank you, great tutorial.
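For anyone else wondering: a common reading (an assumption here, not confirmed by the author) is that the reconstruction loss is summed over all 64*64 pixels while the KL term is summed only over latent dimensions, so dividing the KL by 64*64 puts both terms on a roughly per-pixel scale. A NumPy sketch of the two terms (the exact reconstruction loss in the video may differ, e.g. binary cross-entropy instead of squared error):

```python
import numpy as np

def kl_loss(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)), summed over latent dimensions.
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

def reconstruction_loss(y_true, y_pred):
    # Squared error summed over all 64*64 pixels (one possible choice).
    return np.sum((y_true - y_pred) ** 2)

y_true = np.zeros(64 * 64)             # fake target image
y_pred = np.full(64 * 64, 0.1)         # fake reconstruction
mu, log_var = np.array([0.0, 0.0]), np.array([0.0, 0.0])

# Weighting the KL by 1/(64*64) matches the per-pixel scale of the
# reconstruction term, mirroring the tutorial's total loss line.
total = reconstruction_loss(y_true, y_pred) + (1 / (64 * 64)) * kl_loss(mu, log_var)
```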

ashishbhong

It fails at sample = [image for image in sample] with:

InvalidArgumentError: Input is empty.
[[{{node DecodeJpeg}}]] [Op:IteratorGetNext]
Thanks for your help!
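In my experience that DecodeJpeg "Input is empty" error usually means one of the image files in the dataset directory is zero bytes (e.g. a failed download). A quick way to find the culprit, independent of TensorFlow:

```python
import os

def find_empty_files(root):
    """Return paths of zero-byte files under root; these crash JPEG decoding."""
    empty = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) == 0:
                empty.append(path)
    return empty
```

Delete or re-download anything it reports before rebuilding the dataset.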

luisherrera

Hello sir, why did I get len(training_dataset) equal to 8 instead of 448? 😢
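A likely cause (an assumption, since the code isn't shown here): if the dataset was built with .batch(...), len() returns the number of batches, not the number of images, and 448 images at a batch size of 56 give a length of 8. A plain-Python analogue of that behaviour:

```python
import math

def batch(items, batch_size):
    """Group items into batches, mirroring tf.data's Dataset.batch()."""
    return [items[i:i + batch_size] for i in range(0, len(items), batch_size)]

images = list(range(448))          # stand-ins for 448 training images
batches = batch(images, 56)

# len() now counts batches, not individual images.
assert len(batches) == math.ceil(448 / 56) == 8
```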

Thiha_Music

Can you make a video on experimenting with the data, for example colourising the images with a variational autoencoder?

navinbondade

The libraries are outdated; kindly update the code, please. Nice project, though.

voyager

Pretty great tutorial. I really liked the first half, where you visualized the VAE with animations. I would suggest continuing with the theory behind the code instead of actually showing the code (rather, put a link to the code in the description). It would be more helpful to show a flow chart or pseudocode along with a line-by-line explanation. That way we can follow along without feeling as if we are being spoon-fed the answers. That being said, great explanation.

bryangass

Can you make a video on a convolutional-plus-LSTM autoencoder and a residual-network autoencoder?

navinbondade

Excuse me, I am trying to save the images one by one; however, you save them in groups. How can I save them individually?

eneko

Very clear explanation and code. Thanks so much for this great content!!

facundopedemonte

Such a wonderful explanation! Thank you so much!

ОПривет-ъъ

Very nice explanation. I have forwarded this link to many friends who are working with autoencoders. One request: can you explain speaker recognition using autoencoders?

arundhatimehendale

Hello mate, can I have your dataset? The link you provided does not work. Thanks!

YaseenKhan-xntw

This is an extremely underrated video. I've looked at at least a dozen efforts to explain this concept. This is the only one that goes straight to a clear explanation, ignoring the heavy math that only gets in the way, and produces a very beautiful working code example. I do hope you continue to produce this kind of content. And I hope that it gets the recognition it deserves.

pi