Autoencoder

Last revised by Edgar Lorente on 4 May 2021

Autoencoders are an unsupervised learning technique in which artificial neural networks learn to produce a compressed representation of the input data.

Essentially, autoencoding is a data compression algorithm in which the compression and decompression functions are learned automatically from examples by a neural network.

The main components of an autoencoder are:

  • encoder: through which the model learns how to reduce the input dimensions and compress the input data into an encoded representation
  • bottleneck: the layer that contains the compressed representation of the input data; this is the lowest-dimensional representation of the input data
  • decoder: through which the model learns how to reconstruct the data from the encoded representation to be as close to the original input as possible
  • reconstruction loss: this is the loss function that measures how well the decoder is performing and how close the output is to the original input

The key characteristic of an autoencoder's architecture is the bottleneck, which limits the amount of information that can flow through the network and thereby forces a learned compression of the input data. The simplest way to implement a bottleneck is to constrain the number of nodes in the hidden layers of the network. By then penalizing the network according to the reconstruction error, the model learns the most important/representative attributes of the input data and how best to reconstruct the original.
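
To make these components concrete, below is a minimal sketch in PyTorch (the library choice is an assumption, as are the 784-dimensional input, the 128-unit hidden layers and the 32-unit bottleneck; none of these values are prescribed by the technique):

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, bottleneck_dim=32):
        super().__init__()
        # encoder: compresses the input into the bottleneck representation
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, bottleneck_dim),  # bottleneck layer
        )
        # decoder: reconstructs the input from the bottleneck representation
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
            nn.Sigmoid(),  # assumes inputs are scaled to [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
criterion = nn.MSELoss()  # reconstruction loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)  # dummy batch standing in for real training data
optimizer.zero_grad()
reconstruction = model(x)
loss = criterion(reconstruction, x)  # the target is the input itself
loss.backward()
optimizer.step()
```

Note that the reconstruction loss compares the output with the input itself; no labels are involved, which is why the training is unsupervised.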

Autoencoders are generally data-specific and lossy. "Data-specific" means that they are only able to compress data similar to what they have been trained on. For example, an autoencoder trained on photos of faces would do a rather poor job of compressing photos of flowers, because the features it would learn would be face-specific 4. "Lossy" means that the decompressed outputs will be degraded compared to the original inputs; this is in contrast to lossless compression.

There are several different types of autoencoders 6:

  • denoising autoencoders
  • sparse autoencoders
  • deep autoencoders
  • contractive autoencoders
  • undercomplete autoencoders
  • convolutional autoencoders
  • variational autoencoders
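
As an illustration of the first of these types, a denoising autoencoder is trained to reconstruct the clean input from a deliberately corrupted copy. A minimal sketch, continuing from the PyTorch Autoencoder above (the Gaussian noise level of 0.2 is an arbitrary illustrative value):

```python
import torch

# continuing from the Autoencoder sketch above (assumed setup)
model = Autoencoder()
criterion = torch.nn.MSELoss()
x = torch.rand(64, 784)  # clean dummy batch

# corrupt the input, but score the reconstruction against the clean
# original, so the network learns to remove the noise
noisy_x = (x + 0.2 * torch.randn_like(x)).clamp(0.0, 1.0)
loss = criterion(model(noisy_x), x)  # target is the clean input
```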

Autoencoders in radiology

Uses of autoencoders relevant to radiological AI include noise reduction and anomaly detection 2.
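
For anomaly detection, one common recipe is to train the autoencoder on normal studies only and to flag any input whose reconstruction error is unusually high, exploiting the data-specific behavior described above. A minimal sketch, reusing the PyTorch model from earlier (the threshold is hypothetical and would in practice be tuned on a validation set):

```python
import torch

model.eval()  # autoencoder assumed already trained on normal data only
threshold = 0.05  # hypothetical value, tuned on held-out normal data

with torch.no_grad():
    x = torch.rand(16, 784)  # batch of unseen studies (dummy data)
    reconstruction = model(x)
    # per-sample mean squared reconstruction error
    errors = ((reconstruction - x) ** 2).mean(dim=1)
    is_anomalous = errors > threshold  # flag poorly reconstructed inputs
```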

History and etymology

The idea of autoencoders has been in the neural network literature for decades (LeCun, 1987; Bourlard and Kamp, 1988; Hinton and Zemel, 1994) 3. Traditionally, autoencoders were used for dimensionality reduction or feature learning, in other words, to reduce the complexity of data and reveal its internal structure.

 
