An autoencoder is an unsupervised learning technique in which a neural network is trained to reconstruct its own input. Because the data must pass through a constrained intermediate representation, the network is forced to learn an efficient encoding that removes redundancy from the input. An autoencoder is a special case of an [[encoder-decoder model]] where, typically, the input and output domains are the same.

![[autoencoder-representation.png]]
Source: [v7labs](https://www.v7labs.com/blog/autoencoders-guide)

An autoencoder is composed of an encoder, a [[latent space]] (or bottleneck), and a decoder:

![[autoencoder-components.png]]
Source: [Deepak Birla (medium)](https://medium.com/@birla.deepak26/autoencoders-76bb49ae6a8f)

The autoencoder was first proposed as a nonlinear generalization of [principal component analysis](https://en.wikipedia.org/wiki/Principal_component_analysis "Principal component analysis") (PCA) by Kramer (1991).
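Formally, the encoder $f$ maps an input $x$ to a latent code $z = f(x)$, and the decoder $g$ maps $z$ back to a reconstruction $\hat{x} = g(z)$; training minimizes a reconstruction loss such as $\lVert x - g(f(x)) \rVert^2$. Below is a minimal sketch of a fully connected autoencoder in PyTorch; the layer sizes, `latent_dim`, and the dummy training loop are illustrative assumptions, not details from the sources above.

```python
# Minimal sketch of a fully connected autoencoder in PyTorch.
# Layer sizes, latent_dim, and the training setup are illustrative
# assumptions, not prescribed by this note.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        # Encoder: compresses the input into the latent (bottleneck) vector z.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstructs the input from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)        # x -> latent code z
        return self.decoder(z)     # z -> reconstruction x_hat

# Training minimizes the reconstruction error between input and output.
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 784)            # dummy batch standing in for real data
for _ in range(5):
    x_hat = model(x)
    loss = loss_fn(x_hat, x)       # no labels: the input is its own target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that with purely linear layers and squared-error loss, such a network learns the same subspace as PCA; the nonlinear activations are what make the autoencoder the nonlinear generalization described by Kramer (1991).

## Reference

Kramer, Mark A. ‘Nonlinear Principal Component Analysis Using Autoassociative Neural Networks’. _AIChE Journal_ 37, no. 2 (1991): 233–43. [https://doi.org/10.1002/aic.690370209](https://doi.org/10.1002/aic.690370209).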