How to Implement Autoencoders in Python and Keras || The Decoder

Join The Sound Of AI Slack community:

Learn how to build autoencoders with Python, TensorFlow, and Keras. In particular, in this video you’ll learn how to build the decoder component of an autoencoder.

Code:

===============================

Interested in hiring me as a consultant/freelancer?

Follow Valerio on Facebook:

Connect with Valerio on Linkedin:

Follow Valerio on Twitter:

===============================

Content
0:00 Intro
0:44 Build method update
1:35 Build decoder
4:58 Add decoder input
5:46 Add dense layer
10:06 Add reshape layer
11:57 Add convolutional transpose layers
23:37 Add output layer
29:00 Build decoder recap
30:22 Summary method update
30:57 Autoencoder instantiation + architecture summary
34:32 What's up next?
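The build steps in the chapter list above can be sketched as a minimal Keras decoder. This is a hedged sketch, not Valerio's exact code: the 2-dimensional latent space, the 7×7×32 pre-reshape feature map, and the filter counts are illustrative assumptions chosen so the output comes back to a 28×28×1 image.

```python
import numpy as np
from tensorflow.keras import Model
from tensorflow.keras.layers import (Input, Dense, Reshape,
                                     Conv2DTranspose, Activation)

# Decoder input: a point in the latent space (2 dims is an assumption).
decoder_input = Input(shape=(2,), name="decoder_input")

# Dense layer expands the latent vector to the number of units needed
# to rebuild the encoder's last conv feature map (7 x 7 x 32 here).
x = Dense(7 * 7 * 32, name="decoder_dense")(decoder_input)

# Reshape the flat vector back into a 3-D feature map.
x = Reshape((7, 7, 32))(x)

# Conv transpose layers upsample the feature map back toward 28 x 28.
x = Conv2DTranspose(32, kernel_size=3, strides=2, padding="same",
                    activation="relu", name="decoder_conv_t_1")(x)  # -> 14x14
x = Conv2DTranspose(16, kernel_size=3, strides=2, padding="same",
                    activation="relu", name="decoder_conv_t_2")(x)  # -> 28x28

# Output layer: a single filter plus sigmoid, so pixels land in [0, 1].
x = Conv2DTranspose(1, kernel_size=3, strides=1, padding="same",
                    name="decoder_output")(x)
decoder_output = Activation("sigmoid")(x)

decoder = Model(decoder_input, decoder_output, name="decoder")
print(decoder.output_shape)  # (None, 28, 28, 1)
```

With `padding="same"`, each stride-2 conv transpose doubles the spatial size (7 → 14 → 28), which is why the output shape mirrors a 28×28 input.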
Comments

Best explanation I've ever seen, Thank you Valerio, really appreciate it.

mohamedalansary

Best explanation I've seen of the encoder yet!

garyhuntress

Thanks! This is useful for my work in this area. I hope you can make many more videos.

linhnguyenuc

Thank you for the nice video, very useful. The training of autoencoders is very challenging if one wants to get better efficiency compared to PCA. I wonder if you are going to cover it in the next sessions, or if you could kindly recommend a source to learn about it.

mostafahasanian

Great work, sir!
But why don't we put ReLU and batch normalization after the last Conv2DTranspose and after the dense layer in the decoder?

EngRiadAlmadani

Was eagerly waiting... Thank you, Valerio!

sandipandhar

Hi, Valerio, thanks for your videos, I really appreciate them. But I have a question: what does 'auto' mean in 'autoencoder'?

jiwenlu

Thank you for the good video. I think the encoder and decoder are not complete mirrors of each other: if you look at the total params, you can see the difference between the encoder and the decoder.
Thank you 🌹

mohamadvahabian

If I wanted to extract the latent representation and store it, what's the best way to do that?

jamiepond

Are we going to use a custom training loop to train the autoencoder model since we are not inheriting it from the Model subclass?

Saitomar

Valerio, in the function:
def _add_conv_transpose_layers(self, x):
    """Add conv transpose blocks."""
    # loop through all the conv layers in reverse order and stop at
    # the first layer
    for layer_index in reversed(range(1, self._num_conv_layers)):
        x = self._add_conv_transpose_layer(layer_index, x)
    return x
Shouldn't the for loop use a different range? Since the filter number should be that of the OUTPUT layer of the conv transpose -- not of the current layer?

amitresh

Great video, but why do you say "luckily" about the output shape being equal to the input shape? Is it possible in TF that, by carelessly setting the filters, strides, and other params, the model's output shape will differ from the input size?

I translated this model to Torch and found that my output is 25 by 25 instead of 28 by 28. Possibly it's because of differences between the Torch and TF formulas for calculating the output height and width of a conv2d transpose. However, I played around with different filter and kernel sizes and managed to get equal sizes.
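The mismatch described in this comment can indeed come from the output-size formula. With explicit padding (as in Torch's ConvTranspose2d, ignoring output_padding and dilation), the output size is (in − 1) · stride − 2 · padding + kernel, whereas TF/Keras with padding='same' simply gives in · stride. A small sketch of the arithmetic, in plain Python with no frameworks:

```python
def conv_transpose_out(size, kernel, stride=1, padding=0):
    """Output size of a conv transpose with explicit padding
    (PyTorch's ConvTranspose2d formula, ignoring output_padding
    and dilation)."""
    return (size - 1) * stride - 2 * padding + kernel

def conv_transpose_out_same(size, stride=1):
    """Output size with TF/Keras padding='same'."""
    return size * stride

# kernel=3, stride=1: explicit padding=0 grows the map, padding=1 keeps it.
print(conv_transpose_out(28, kernel=3, stride=1, padding=0))  # 30
print(conv_transpose_out(28, kernel=3, stride=1, padding=1))  # 28
# padding='same' with stride=1 also keeps 28.
print(conv_transpose_out_same(28, stride=1))  # 28
```

So when porting a Keras model that uses padding='same' to Torch, the explicit padding usually has to be chosen per layer (here, padding=1 for a 3×3 kernel at stride 1) to reproduce the same spatial sizes.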

FlREDEATH

Will you include how to feed extracted features of audio data into the autoencoder and rebuild the generated samples from it?

sandipandhar

Hey man! What would it take to sit down with you on Skype and go over TensorFlow? I've been doing tons of reading and watching videos, but it would be great to talk to someone in person who's just as passionate.

Josh-ujgb

Sir, you are doing a great job. I really enjoy it a lot. Are you planning to start a new series on a speech-to-text engine using PyTorch?

berkaycinci

First of all, thanks for the really nice tutorial. Second, what is the function of the dense layer in the decoder?

noelgomariz

Great video, but why so much OOP? The whole class could've had around 30% of the methods and worked just fine.

kamenenator

Please complete the autoencoder playlist, and also cover more deep learning topics; you explain very well!

Aadyagupta

The link "Join The Sound Of AI Slack community:" in the description instead goes to a GitHub file...

JosephWeidinger

I don't understand why your architecture has two dense bottleneck layers, one for the encoder and one for the decoder.
From my understanding, that's not how it works.

antoineberkani