Neural Networks approximate any function

The Universal Approximation Theorem states that a feed-forward neural network with a single hidden layer of enough units can approximate any continuous function on a compact domain to arbitrary accuracy; in practice, depth lets networks reach a given accuracy with far fewer units. This is amazing.
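As a rough sanity check of the theorem (a minimal sketch, not a proof), the snippet below trains a one-hidden-layer network with plain gradient descent to approximate sin(x) on [-π, π]; the layer sizes, learning rate, and step count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: approximate sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# One hidden layer of 32 tanh units, linear output.
W1 = rng.normal(0, 1.0, (1, 32))
b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2              # network output
    err = pred - y
    loss = np.mean(err ** 2)        # mean squared error

    # Backward pass (chain rule through MSE, linear, tanh)
    g_pred = 2 * err / len(X)
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1 - h ** 2)        # derivative of tanh
    g_W1 = X.T @ g_z
    g_b1 = g_z.sum(0)

    # Gradient descent step
    W2 -= lr * g_W2; b2 -= lr * g_b2
    W1 -= lr * g_W1; b1 -= lr * g_b1

print(f"final MSE: {loss:.4f}")
```

With more hidden units (and more training), the fit can be made as tight as desired, which is exactly what the theorem promises.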

Perceptrons (neurons) are the fundamental building blocks of a neural network (NN). Placing perceptrons side by side creates a layer, and stacking layers one after another creates depth; a network with one or more hidden layers is called a multilayer perceptron (MLP).
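The side-by-side and stacking structure can be sketched directly with matrices: each weight matrix column is one perceptron, and applying layers in sequence gives depth. The layer sizes and ReLU choice here are illustrative assumptions, not part of the text above.

```python
import numpy as np

rng = np.random.default_rng(1)

def dense(n_in, n_out):
    """One layer: n_out perceptrons side by side, each reading all n_in inputs."""
    return rng.normal(0, 0.5, (n_in, n_out)), np.zeros(n_out)

# Depth: three layers applied one after another (4 -> 8 -> 8 -> 2).
layers = [dense(4, 8), dense(8, 8), dense(8, 2)]

def forward(x, layers):
    for W, b in layers[:-1]:
        x = np.maximum(0, x @ W + b)   # ReLU perceptron outputs
    W, b = layers[-1]
    return x @ W + b                   # linear output layer

x = rng.normal(size=(3, 4))            # batch of 3 input vectors
out = forward(x, layers)
print(out.shape)                       # (3, 2)
```

Making a layer wider means adding columns to its weight matrix; making the network deeper means appending another (W, b) pair to the list.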

Activation Functions

Loss Functions

Variations

Autoencoder

Generative Adversarial Network

Extreme Learning Machine

Convolutional Neural Network

U-Net

Residual Net

Fractionally Strided Convolution

Recurrent Neural Network