In the Neural Network tutorials, we saw that the network tries to predict the correct label corresponding to the input data. For the MNIST dataset (a dataset of handwritten digits), we tried to predict the correct digit in each image. This type of machine learning algorithm is called supervised learning, simply because we use labels.

An autoencoder is a neural network that tries to reconstruct its input. Since no labels are involved in training an autoencoder, it is an unsupervised learning method. By encoding the input data into a new space (which we usually call the _latent space_), we obtain a new representation of the data. Two general types of autoencoders exist, depending on the dimensionality of the latent space:

  1. dim(latent space) > dim(input space): This type of autoencoder is known as a sparse autoencoder. With a large number of hidden units, the autoencoder learns a useful sparse representation of the data.

  2. dim(latent space) < dim(input space): This type of autoencoder has applications in dimensionality reduction, denoising, and learning the distribution of the data. Here the new representation (the latent space) retains the most essential information of the data.
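To make the second case concrete, here is a minimal sketch (not the tutorial's TensorFlow code) of a linear autoencoder with dim(latent) < dim(input), trained by gradient descent on the mean squared reconstruction error. All sizes, the learning rate, and the random data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # 200 samples, 8-dimensional input

W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder: 8 -> 3 (latent space)
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder: 3 -> 8 (reconstruction)

lr = 0.01
for step in range(500):
    Z = X @ W_enc                            # latent representation
    X_hat = Z @ W_dec                        # reconstruction of the input
    err = X_hat - X
    # gradients of the mean squared reconstruction error
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

Even this linear version compresses the 8-dimensional input through a 3-dimensional bottleneck, so the network is forced to keep only the most essential directions of variation in the data.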

Autoencoders also help us understand how neural networks work. We can visualize what a node has become an expert at detecting, which gives us an intuition about how these networks perform.
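One simple way to see what a node has specialized in (a sketch of the idea behind such visualizations, not the tutorial's exact code): for a hidden unit computing an activation of w . x + b, the unit-norm input that maximally excites it is w / ||w||, which can be reshaped back to image dimensions. The weight matrix below is random, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 16))        # weights of 16 hidden units, 28x28 inputs

unit = 0
w = W[:, unit]
x_max = w / np.linalg.norm(w)         # unit-norm input maximizing this node
image = x_max.reshape(28, 28)         # visualize e.g. with plt.imshow(image)
```

For a trained autoencoder on MNIST, these images typically show pen strokes or blobs, i.e. the features each hidden node responds to.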

In this tutorial we will implement:

  1. Denoising autoencoder (noiseRemoval)
  2. Visualizing activations of nodes in the hidden layer (visActivation)
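For the denoising case, the key step is corrupting the inputs while keeping the clean images as targets. A minimal sketch of that corruption step (the noise level 0.3 and the random stand-in data are assumptions, not values from this tutorial):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(size=(5, 784))            # stand-in for a batch of MNIST images
noisy = clean + 0.3 * rng.normal(size=clean.shape)
noisy = np.clip(noisy, 0.0, 1.0)              # keep pixels in the valid [0, 1] range
# training pairs: input = noisy, target = clean
```

The network never sees the clean image as input, so to reconstruct it the hidden layer must learn structure that survives the noise.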
© 2018 Easy-TensorFlow team. All Rights Reserved.