Autoencoders - Ep. 10 (Deep Learning SIMPLIFIED)

Autoencoders are a family of neural nets that are well suited for unsupervised learning, a method for detecting inherent patterns in a data set. These nets can also be used to label the resulting patterns.

Deep Learning TV on

Essentially, autoencoders reconstruct a data set and, in the process, figure out its inherent structure and extract its important features. An RBM is a type of autoencoder that we have previously discussed, but there are several others.

Autoencoders are typically shallow nets, the most common of which have one input layer, one hidden layer, and one output layer. Some nets, like the RBM, have only two layers instead of three. Input signals are encoded along the path to the hidden layer, and these same signals are decoded along the path to the output layer. Like the RBM, the autoencoder can be thought of as a 2-way translator.
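
To make the encode/decode picture concrete, here is a minimal NumPy sketch of such a three-layer net; the layer sizes, sigmoid activations, and random input are illustrative assumptions rather than details from the video.

```python
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden = 784, 30                 # illustrative sizes: a flattened 28x28 image -> a 30-number code

W_enc = rng.normal(0, 0.01, (n_input, n_hidden))   # input -> hidden (encoder weights)
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.01, (n_hidden, n_input))   # hidden -> output (decoder weights)
b_dec = np.zeros(n_input)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # signals are encoded on the way to the hidden layer
    return sigmoid(x @ W_enc + b_enc)

def decode(h):
    # ...and decoded on the way back out to the output layer
    return sigmoid(h @ W_dec + b_dec)

x = rng.random(n_input)        # a fake input vector
x_hat = decode(encode(x))      # the two-way translation: input -> code -> reconstruction
```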

Autoencoders are trained with backpropagation and a new concept known as loss. Loss measures the amount of information about the input that was lost through the encoding-decoding process. The lower the loss value, the stronger the net.
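
A small sketch of what such a loss might look like, assuming mean squared error as the reconstruction loss (the video does not commit to a specific formula), with made-up toy vectors:

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    # average squared difference between the input and its reconstruction:
    # the information lost in the encode-decode round trip
    return np.mean((x - x_hat) ** 2)

x = np.array([0.0, 1.0, 1.0, 0.0])       # original input (toy example)
x_hat = np.array([0.1, 0.9, 0.8, 0.2])   # imperfect reconstruction
print(reconstruction_loss(x, x_hat))      # 0.025 -- backpropagation drives this toward 0
```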

Some autoencoders have a very deep structure, with an equal number of layers for encoding and decoding. A key application for deep autoencoders is dimensionality reduction. For example, these nets can transform a 256x256-pixel image into a representation with only 30 numbers. The image can then be reconstructed with the appropriate weights and biases; in addition, some nets add random noise at this stage in order to make the discovered patterns more robust. The reconstructed image won't be perfect, but it will be a decent approximation, depending on the strength of the net. The purpose of this compression is to reduce the input size of a data set before feeding it to a deep classifier. Smaller inputs lead to large computational speedups, so this preprocessing step is worth the effort.
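
For readers who want to see the idea in code, here is a hedged Keras sketch of a deep autoencoder with a 30-number bottleneck; the framework, layer sizes, and noise level are assumptions for illustration, not details from the video.

```python
# Sketch only: framework, layer sizes, and noise level are assumptions.
from tensorflow.keras import layers, models

n_pixels = 256 * 256

inputs = layers.Input(shape=(n_pixels,))
noisy = layers.GaussianNoise(0.1)(inputs)            # optional random noise for robustness
h = layers.Dense(1000, activation="relu")(noisy)     # encoder stack
h = layers.Dense(250, activation="relu")(h)
code = layers.Dense(30, name="code")(h)              # the 30-number representation
h = layers.Dense(250, activation="relu")(code)       # decoder mirrors the encoder
h = layers.Dense(1000, activation="relu")(h)
outputs = layers.Dense(n_pixels, activation="sigmoid")(h)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")    # loss = reconstruction error

# After autoencoder.fit(images, images, ...), the encoder alone turns each image
# into a 30-number code that can be fed to a downstream deep classifier:
encoder = models.Model(inputs, code)
```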

Have you ever used an autoencoder to reduce the dimensionality of your data? Please comment and share your experiences.

Deep autoencoders are much more powerful than their predecessor, principal component analysis. In the video, you'll see a comparison of the codes the two models produce for news stories on different topics. Between the two models, you'll find the deep autoencoder to be far superior.
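
A rough sketch of how such a comparison could be set up, using scikit-learn's PCA as the linear baseline; the data below is a random placeholder, not the news-story set from the video, and the 2-D codes are an assumption for visualization.

```python
# Placeholder data and assumed 2-D codes; for illustration only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
docs = rng.random((500, 2000))        # stand-in for bag-of-words vectors of news stories

pca_codes = PCA(n_components=2).fit_transform(docs)   # linear 2-D codes
print(pca_codes.shape)                                 # (500, 2)

# For the autoencoder side, train a net like the Keras sketch above but with a
# 2-unit bottleneck, then take encoder.predict(docs) as the nonlinear codes.
# Plotting both sets of codes, coloured by topic, reproduces the video's comparison.
```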

Credits
Nickey Pickorita (YouTube art) -
Isabel Descutner (Voice) -
Dan Partynski (Copy Editing) -
Marek Scibior (Prezi creator) -
Jagannath Rajagopal (Creator, Producer and Director) -

Comments

The next video is on Recursive Neural Tensor Nets, or RNTNs, and it will come out tomorrow!

DeepLearningTV

If your goal is unsupervised learning and/or feature extraction, then consider using an autoencoder. Enjoy :-)

DeepLearningTV

Great set of videos; I am really enjoying them. I hope they get a bit more hands-on, so we can try out the things we have learnt on our own problems.

ShajeeRafi

No, I have never needed to use autoencoders to reduce the dimensionality of data. But I would definitely use one to generate (reconstruct) dialogues. It is similar to what humans do with empathetic listening, trying to understand somebody else. A perfect algorithm for a self-learning chat bot!

kkochubey

Thanks for these wonderful lectures; I have a question:
While using autoencoders, as mentioned in the video, we reduce a 780-pixel input into, say, 30 numbers that accurately approximate the large input. But the point I don't understand is why it is necessary to decode it back to the original input/image, since our purpose is either classification or regression?

nutito

In Ep. 5, she says the vanishing gradient is a serious problem for training deep neural networks, and that in 2006 Hinton, LeCun, and Bengio did breakthrough work to overcome it. But in the following videos she introduces the CNN, RBM, DBN, and autoencoder; does that mean these models are used to deal with the vanishing gradient problem? She didn't state this conclusion clearly in the videos. Is my understanding right?

zhongzhongclock

These video lectures are useful, and the effort is commendable. I have a question about PCA: can we reconstruct the data after reducing its dimensions (by applying PCA)? If yes, then how?

saeedahmed

I've used the Discrete Cosine Transform to perform lossless compression of images and then reconstruct them. Can I say that I used an autoencoder?

abdurrehman

I am looking to reduce the processing time of a stacked autoencoder. Can you provide more precise and specific information on stacked autoencoders, please?

dr.kiranravulakollu

Do autoencoders belong to the feedforward nets family?

gast

It looks like the animation in the video is out of sync with the voice-over. It seems to stay on the RBM for a very long time while mentioning other things that sound like they should have their own animation. Otherwise though, very good stuff!

robertcrowe

Do you have the sources for the image comparing PCA and autoencoders at 3:12?
Thanks

RodolpheLampe

Content was lifted from a Google Talk.

roberth