An autoencoder is an unsupervised neural network that learns how to a) efficiently compress and encode its input data and b) reconstruct that compressed representation back to an output that is as close to the original data as possible.
4 components of autoencoders
- Encoder
- Bottleneck
- Decoder
- Reconstruction Loss
In short, the input data is passed into the encoder, which learns how to reduce the dimensionality and produce a compressed representation of the input. The bottleneck layer contains this compressed representation. The role of the decoder is to learn how to reconstruct the data from the compressed representation. Finally, the reconstruction loss measures how well the decoder has reconstructed the original data from the compressed representation.
The goal is to minimise the reconstruction loss (using backpropagation).
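The pipeline above can be sketched end to end. The following is a minimal toy example (not from the article): a linear autoencoder in NumPy that encodes 8-dimensional inputs into a 3-dimensional bottleneck, decodes them back, and minimises the mean-squared reconstruction loss with backpropagation. The sizes, learning rate, and data are all illustrative assumptions.

```python
import numpy as np

# Toy undercomplete autoencoder: 8-d input -> 3-d bottleneck -> 8-d output.
rng = np.random.default_rng(0)

n_in, n_code = 8, 3                          # input and bottleneck sizes
W_enc = rng.normal(0, 0.1, (n_in, n_code))   # encoder weights
W_dec = rng.normal(0, 0.1, (n_code, n_in))   # decoder weights
X = rng.normal(size=(64, n_in))              # toy training data

lr, losses = 0.02, []
for _ in range(1000):
    Z = X @ W_enc                     # encode: compressed representation
    X_hat = Z @ W_dec                 # decode: reconstruction
    err = X_hat - X
    losses.append(np.mean(err ** 2))  # reconstruction loss (MSE)

    # Backpropagation: gradients of the loss w.r.t. both weight matrices,
    # averaged over the batch, then a gradient-descent update.
    dX_hat = 2 * err / X.shape[0]
    dW_dec = Z.T @ dX_hat
    dW_enc = X.T @ (dX_hat @ W_dec.T)
    W_dec -= lr * dW_dec
    W_enc -= lr * dW_enc

print(losses[0], losses[-1])  # reconstruction loss falls during training
```

A real autoencoder would add nonlinear activations and use a framework's autodiff rather than hand-written gradients, but the training objective is the same.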
Why compressed and then reconstruct original input?
The goal is to train an autoencoder to compress the original data accurately and, in doing so, build a lower-dimensional latent representation that captures most of the useful properties of the original data. By forcing the latent representation to have a smaller dimension than the original data, the model has to learn the most important features of the training data (this is known as an undercomplete autoencoder).
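A small illustration of why the undercomplete constraint matters (a toy example, assuming linear layers): if the latent code has the same dimension as the input, the network can trivially copy its input via an identity mapping and achieve zero reconstruction loss without learning any structure; shrinking the code rules this out.

```python
import numpy as np

X = np.random.default_rng(1).normal(size=(16, 8))  # toy 8-d data

# Overcomplete case: identity weights copy the input exactly.
W_enc, W_dec = np.eye(8), np.eye(8)
X_hat = (X @ W_enc) @ W_dec
print(np.allclose(X_hat, X))   # True: perfect copy, nothing learned

# Undercomplete case: an 8 -> 3 -> 8 map has rank at most 3, so an exact
# copy of arbitrary 8-d data is impossible; the model must choose what
# information to keep.
W_enc, W_dec = np.eye(8)[:, :3], np.eye(8)[:3, :]
X_hat = (X @ W_enc) @ W_dec
print(np.allclose(X_hat, X))   # False: some information is discarded
```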
For the general use cases of autoencoders, please check out the Medium articles below. They cover how autoencoders are used in anomaly detection and image denoising! Autoencoders can also be used for dimensionality reduction.
4 types of autoencoders
The simplest, vanilla autoencoder is a three-layer neural network with one hidden layer. In this case, the hidden layer has a smaller dimension than the input and output layers. We can extend this architecture to have multiple hidden layers (multilayer autoencoders). As the name suggests, convolutional autoencoders use convolutional layers rather than fully-connected layers. There are two types of regularised autoencoder: sparse and denoising.
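The denoising variant changes only the training setup, not the architecture: the encoder receives a corrupted copy of each example, while the reconstruction loss is computed against the original clean example, so the model cannot simply copy its input. A small sketch of that data preparation (toy data and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
X_clean = rng.uniform(size=(32, 8))                     # clean examples
X_noisy = X_clean + rng.normal(0, 0.1, X_clean.shape)   # corrupted inputs

# A denoising training step encodes X_noisy but scores the reconstruction
# against X_clean, e.g.:
#     loss = mean((decode(encode(X_noisy)) - X_clean) ** 2)
# A sparse autoencoder instead keeps the clean input but adds a penalty on
# the latent code, e.g. lambda * mean(abs(Z)), to the reconstruction loss.
```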