**Normalizing flows** were proposed to address key shortcomings of [[generative adversarial network (GAN)]]s and [[variational autoencoder (VAE)]]s: GANs provide no tractable likelihood, and VAEs optimize only a lower bound on it. By building the model from reversible functions, normalizing flows support exact likelihood evaluation.
Normalizing flows are a class of generative models that allow for flexible modeling of complex probability distributions.
The key idea behind normalizing flows is to model the target distribution by transforming a simple base distribution (e.g. a standard Gaussian) through a sequence of invertible mappings. Each transformation is a deterministic, bijective map whose inverse and Jacobian determinant are cheap to compute. Invertibility is what makes the model tractable: a sample can be mapped back to the base space, and its exact density follows from the change-of-variables formula.
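Concretely, write $z \sim p_Z$ for a sample from the base distribution and $x = f(z)$ for its image under an invertible map $f$. The change-of-variables formula then gives the exact model density:

$$
p_X(x) \;=\; p_Z\!\left(f^{-1}(x)\right)\,\left|\det \frac{\partial f^{-1}(x)}{\partial x}\right| \;=\; p_Z(z)\,\left|\det \frac{\partial f(z)}{\partial z}\right|^{-1}.
$$

For a composition $f = f_K \circ \cdots \circ f_1$, the log-determinants of the individual layers simply add up, which is what makes deep flows trainable by maximum likelihood.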
Composing several such transformations yields a model that can capture complex distributions with intricate structure, as in the sketch below. Normalizing flows are particularly effective at modeling high-dimensional data with complex dependencies, such as images, audio, or text.
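The composition and the density bookkeeping fit in a few lines. Below is a minimal NumPy sketch (not from the original source; the `Affine` layer and all names are hypothetical) that stacks elementwise affine bijections, samples by pushing base noise forward, and evaluates exact log-densities by inverting the stack. Real flows use richer bijections such as coupling layers, but the bookkeeping is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

class Affine:
    """Toy invertible layer: x = exp(s) * z + t (elementwise)."""
    def __init__(self, dim):
        self.s = rng.normal(scale=0.1, size=dim)  # log-scale parameters
        self.t = rng.normal(scale=0.1, size=dim)  # shift parameters

    def forward(self, z):
        # Returns x and log|det J| of z -> x; the Jacobian is
        # diagonal, so det = prod(exp(s)) and log|det| = sum(s).
        return np.exp(self.s) * z + self.t, np.sum(self.s)

    def inverse(self, x):
        return (x - self.t) * np.exp(-self.s)

def log_prob(x, flows, dim):
    """Exact log-density of x under the flow, via change of variables."""
    z, total_log_det = x, 0.0
    for f in reversed(flows):        # invert the composition layer by layer
        z = f.inverse(z)
        _, log_det = f.forward(z)    # log|det| of the forward map at z
        total_log_det += log_det
    # Base distribution: standard Gaussian.
    log_base = -0.5 * np.sum(z**2) - 0.5 * dim * np.log(2 * np.pi)
    return log_base - total_log_det  # log-dets of the layers simply add up

dim = 2
flows = [Affine(dim) for _ in range(3)]  # compose three transformations

# Sampling: draw z from the base and push it forward through each layer.
x = rng.normal(size=dim)
for f in flows:
    x, _ = f.forward(x)

print("sample:", x, "log p(x):", log_prob(x, flows, dim))
```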
![[normalizing-flows.png]]
Source: [towardsdatascience](https://towardsdatascience.com/introduction-to-normalizing-flows-d002af262a4b)