Autoencoder Explained - Deep Neural Networks

#datascience #machinelearning #neuralnetworks
An autoencoder is a neural network that learns to copy its input to its output.
An autoencoder can be divided into two parts: the encoder and the decoder. The encoder is a mapping from the input space into a lower-dimensional latent space (the bottleneck layer).
At this stage the network has learned nothing but a low-dimensional representation of the data, in an unsupervised manner.
What the encoder does here is essentially dimensionality reduction, similar to what PCA does.
The potential of autoencoders resides in their non-linearity, which allows the model to learn more powerful generalizations than PCA and to reconstruct the input with significantly less loss of information.
The decoder is a mapping from the low-dimensional latent space back into the reconstruction space, whose dimensionality equals that of the input space.
The output in the reconstruction space is close to the input, but there is some loss of information; this is called the reconstruction error.
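To make the encoder/decoder/reconstruction-error idea concrete, here is a minimal sketch of an autoencoder in plain NumPy: a tanh bottleneck as the encoder, a linear decoder back to the input dimension, and gradient descent on the mean squared reconstruction error. The toy data, layer sizes, and learning rate are all illustrative assumptions, not anything from the video.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8-D that actually live near a 2-D subspace (assumed setup).
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(200, 8))

d, k, lr = 8, 2, 0.01                        # input dim, bottleneck dim, learning rate
W_enc = rng.normal(scale=0.1, size=(d, k))   # encoder: input space -> latent space
W_dec = rng.normal(scale=0.1, size=(k, d))   # decoder: latent space -> reconstruction space

losses = []
for step in range(500):
    H = np.tanh(X @ W_enc)                # non-linear bottleneck (latent code)
    X_hat = H @ W_dec                     # reconstruction, same dimensionality as input
    err = X_hat - X                       # per-element reconstruction error
    losses.append(np.mean(err ** 2))      # MSE = reconstruction loss
    # Backprop through decoder, then through the tanh encoder
    grad_W_dec = H.T @ err / len(X)
    grad_pre = (err @ W_dec.T) * (1 - H ** 2)   # tanh derivative
    grad_W_enc = X.T @ grad_pre / len(X)
    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The bottleneck forces the network to compress 8 dimensions into 2, so the loss can only fall if the encoder finds the low-dimensional structure in the data, which is exactly the PCA-like behaviour described above.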
One potential use case of autoencoders is anomaly detection.
This is most useful when we have very few negative cases and the classes are imbalanced, but it can also be used in the normal scenario where labelling is hard.
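The anomaly-detection idea can be sketched as follows: train a small autoencoder on "normal" data only, then flag points whose reconstruction error exceeds a threshold (here the 95th percentile of the normal errors). The data, architecture, and threshold are assumptions for illustration, not taken from the video.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" data lives near a 2-D subspace of 8-D space; anomalies do not (assumed setup).
A = rng.normal(size=(2, 8))
X_normal = rng.normal(size=(300, 2)) @ A + 0.05 * rng.normal(size=(300, 8))
X_anomaly = rng.normal(size=(20, 8)) * 2.0       # off-subspace points

d, k, lr = 8, 2, 0.01
W_enc = rng.normal(scale=0.1, size=(d, k))
W_dec = rng.normal(scale=0.1, size=(k, d))

for step in range(800):                          # train on normal data only
    H = np.tanh(X_normal @ W_enc)
    X_hat = H @ W_dec
    err = X_hat - X_normal
    grad_W_dec = H.T @ err / len(X_normal)
    grad_pre = (err @ W_dec.T) * (1 - H ** 2)
    W_enc -= lr * X_normal.T @ grad_pre / len(X_normal)
    W_dec -= lr * grad_W_dec

def recon_error(X):
    """Per-sample mean squared reconstruction error."""
    H = np.tanh(X @ W_enc)
    return np.mean((H @ W_dec - X) ** 2, axis=1)

# Flag anything reconstructed worse than 95% of the normal training data.
threshold = np.percentile(recon_error(X_normal), 95)
flagged = recon_error(X_anomaly) > threshold
print(f"flagged {flagged.mean():.0%} of anomalies")
```

Because the autoencoder only learns to reconstruct the normal manifold, points that lie off it come back with a visibly larger error, which is what makes this work without any anomaly labels.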