Building and Training an Autoencoder in Keras + TensorFlow + Python

Join The Sound Of AI Slack community:

Learn how to build autoencoders with Python, TensorFlow, and Keras. In particular, in this video you’ll learn how to chain encoder + decoder architectures to create an autoencoder. You’ll also learn how to train an autoencoder on the toy MNIST dataset.
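The encoder + decoder chaining described above can be sketched with the Keras functional API. This is a minimal illustration assuming a dense architecture, a 32-dimensional bottleneck, and 28x28 MNIST inputs; the layer sizes and names are illustrative choices, not necessarily the video's exact ones.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Encoder: flatten the image and compress it down to the bottleneck.
model_input = keras.Input(shape=(28, 28, 1), name="encoder_input")
x = layers.Flatten()(model_input)
x = layers.Dense(128, activation="relu")(x)
bottleneck = layers.Dense(32, activation="relu", name="bottleneck")(x)
encoder = keras.Model(model_input, bottleneck, name="encoder")

# Decoder: expand the bottleneck back to the original image shape.
decoder_input = keras.Input(shape=(32,), name="decoder_input")
x = layers.Dense(128, activation="relu")(decoder_input)
x = layers.Dense(28 * 28, activation="sigmoid")(x)
decoder_output = layers.Reshape((28, 28, 1))(x)
decoder = keras.Model(decoder_input, decoder_output, name="decoder")

# Autoencoder: the decoder applied to the encoder's output,
# both anchored on the same input tensor.
autoencoder = keras.Model(model_input, decoder(encoder(model_input)),
                          name="autoencoder")
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.summary()
```

Because the encoder and decoder are separate `Model` objects, each can also be used on its own after training, e.g. `encoder.predict(...)` to get latent codes.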

Code:

===============================

Interested in hiring me as a consultant/freelancer?

Follow Valerio on Facebook:

Connect with Valerio on Linkedin:

Follow Valerio on Twitter:

===============================

Content
0:00 Intro
0:31 Build autoencoder
4:29 Update summary method
6:46 Build compile method
10:03 Build train method
13:04 Create the train script
13:54 The MNIST dataset
14:26 Training the autoencoder
25:01 Performing a train run
26:17 What's up next?
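The train-script steps in the outline (load MNIST, normalize, fit) mostly come down to a small preprocessing step: scaling pixels to [0, 1] and adding a channel axis. A sketch of that step, using random data as a stand-in for `keras.datasets.mnist.load_data()` so it runs offline; the helper name `preprocess` is mine, not necessarily the video's.

```python
import numpy as np

def preprocess(images):
    """Scale uint8 pixels to [0, 1] floats and add a channel axis."""
    x = images.astype("float32") / 255.0
    return x[..., np.newaxis]          # (N, 28, 28) -> (N, 28, 28, 1)

# Stand-in for the arrays returned by keras.datasets.mnist.load_data().
fake_train = np.random.randint(0, 256, size=(8, 28, 28), dtype=np.uint8)
x_train = preprocess(fake_train)

print(x_train.shape)                   # (8, 28, 28, 1)

# For an autoencoder the input is also the target:
# autoencoder.fit(x_train, x_train, epochs=..., batch_size=32)
```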
Comments

Congratulations on attracting and inspiring your first 10k subscribers. Most importantly, in my humble opinion, the channel promotes knowledge and inspires people around the world. That is its greatest value and deserves to be appreciated. It's 10k subscribers today, but after the success of the Open Research project, this awesome community will snowball to millions. Have a nice day, and thanks for the awesome video.

markusbuchholz

I love this series, thanks so much for making it. I’d love some more information on how you tweaked that factor and what parameters would improve the results. Thank you so much!

alexijohansen

Great content as always! Are you planning to cover more advanced autoencoders that are popular for audio tasks, such as the Variational Autoencoder, the Variational Autoencoder with GMM, and VQ-VAE?

Jononor

Amazing stuff! Can I modify an AE/VAE to work on a symbolic representation? I mean feeding in notes from MIDI files for training and generating notes instead of spectrograms. Have you done a video about that?

rubyrails

Thanks for the great series! I was wondering whether _model_input needs to be initialized at that point (after the bottleneck). I'm not sure how that works... (we apply the encoder to _model_input, but at the same time _model_input is defined inside the encoder). Maybe I'm missing something... Thank you again!
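For what it's worth, a minimal sketch of why that pattern is not circular in the Keras functional API: `keras.Input` returns a plain symbolic tensor, so the same Python reference can be used both while building the encoder and later to anchor the combined autoencoder graph. The sizes and names below are illustrative, not the video's exact code.

```python
from tensorflow import keras
from tensorflow.keras import layers

# keras.Input returns a symbolic tensor; nothing is "initialized" yet.
_model_input = keras.Input(shape=(784,))

# The tensor is created while building the encoder...
bottleneck = layers.Dense(2, activation="relu")(_model_input)
encoder = keras.Model(_model_input, bottleneck, name="encoder")

decoder_input = keras.Input(shape=(2,))
reconstruction = layers.Dense(784, activation="sigmoid")(decoder_input)
decoder = keras.Model(decoder_input, reconstruction, name="decoder")

# ...but it is just a Python reference, so it can be reused afterwards.
# encoder(_model_input) re-applies the encoder's layers to that tensor.
autoencoder = keras.Model(_model_input, decoder(encoder(_model_input)),
                          name="autoencoder")
print(autoencoder.output_shape)  # (None, 784)
```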

musytech-musicaytecnologia

Hi Valerio, I really appreciate your work on audio + DL! On the VAE, will you be making a video soon on training with audio (.wav) or spectrograms, beyond MNIST?

ganeshsuryanarayanan

Hi Valerio! Thank you so much for sharing your knowledge with us; it has helped me a lot. I'm a beginner researcher in signal processing, mainly image processing, and I'm interested in machine learning. I need your advice: I'm lost on how to choose between Kaggle and Colab to run an autoencoder program like yours, since I have a dataset of 10,000 images (250, 250, 3). I urgently need your help. Thanks!

houdachakib

Thanks, this is helpful, and really clear for a beginner like me. I have a question: what are the best algorithms for extracting features from audio? Could you show me? I used MFCCs and mel spectrograms, but they didn't work well for audio classification.

linhnguyenuc