This is a brief introduction to neural networks. We will start by comparing the traditional machine learning pipeline to the neural network pipeline. We will then discuss perceptrons, multiple perceptrons, bias implementation, composition, non-linear activation, and convolutional neural networks.
Here is a common machine learning pipeline. Each bullet below lists a manual step that must be taken to build a classifier. The goal of neural networks is to automate these steps.
Image formation - Manually capturing photos for database
Filtering - Hand designed gradients and transformation kernels
Feature points - Hand designed feature descriptors
Dictionary building - Hand designed quantization and compression
Classifier - Not hand designed, learned by the model
Goal of Neural Networks: To build a classifier that automatically learns steps 2-4 above (filtering, feature points, and dictionary building)
Compositionality: An image is made up of parts, and putting these parts together creates a representation.
Perceptrons
Neural networks are loosely inspired by biological neural networks.
For linear classifiers, we formulate a binary output (classifier) based on a vector of weights and a bias
Example: For an image with N pixels, we can vectorize the image into an N-dimensional input vector. The dimensions of our variables will be: x (N-dimensional input vector), w (N-dimensional weight vector), b (scalar bias), and the output y = w·x + b (scalar)
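As a minimal sketch of this (assuming NumPy and an image flattened to N pixels; the sizes here are illustrative), a single perceptron computes a weighted sum of the inputs plus the bias and thresholds it:

```python
import numpy as np

N = 784                      # illustrative pixel count (e.g. a 28x28 image)
x = np.random.rand(N)        # input: vectorized image, shape (N,)
w = np.random.randn(N)       # weights: one per pixel, shape (N,)
b = 0.0                      # bias: a single scalar

score = w @ x + b            # w.x + b, a scalar
y = 1 if score > 0 else 0    # binary output of the perceptron
```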
Multiple Perceptrons
For a multi-class classification problem, we can add more perceptrons like the one above and pass each input value to every perceptron.

Example: For an image with N pixels, we again vectorize the image into an N-dimensional input vector, but now we have 10 classes. The dimensions of our variables will be: x (N-dimensional input vector), W (10×N weight matrix, one row per class), b (10-dimensional bias vector), and the output y = Wx + b (vector of 10 class scores)
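A hedged sketch of the 10-class case (sizes are again illustrative): stacking one weight vector per class gives a weight matrix W, and the output becomes a 10-dimensional vector of class scores.

```python
import numpy as np

N, C = 784, 10               # N pixels, C classes (illustrative sizes)
x = np.random.rand(N)        # vectorized image, shape (N,)
W = np.random.randn(C, N)    # one row of weights per class, shape (C, N)
b = np.zeros(C)              # one bias per class, shape (C,)

scores = W @ x + b           # W x + b, shape (C,): one score per class
predicted_class = int(np.argmax(scores))
```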
Bias implementation
To implement bias, we add a dimension to each input vector. This added input value should be consistent across perceptrons and input vectors, usually just a 1 at the start or end of the vector. The increased dimensionality adds one extra weight to each perceptron, and this extra weight is the bias of the perceptron.
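A minimal sketch of this bias trick, assuming the same NumPy setup as above: append a constant 1 to each input vector and fold the biases into an extra column of the weight matrix.

```python
import numpy as np

N, C = 784, 10
x = np.random.rand(N)
W = np.random.randn(C, N)
b = np.random.randn(C)

# Bias trick: append a constant 1 to the input and a bias column to the weights.
x_aug = np.append(x, 1.0)            # shape (N + 1,)
W_aug = np.hstack([W, b[:, None]])   # shape (C, N + 1)

# The augmented product equals the original affine map W x + b.
assert np.allclose(W_aug @ x_aug, W @ x + b)
```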

Composition
The goal of composition is to represent complex functions as a composition of smaller functions. Compositionality allows for hierarchical knowledge.
The output vector of one perceptron layer must have the same dimension as the input vector of the next perceptron layer.
This arrangement is also known as a multi-layer perceptron. The perceptron layers between the initial input and the final output are known as hidden layers. Usually, deeper compositions with more hidden layers give better performance, and these deeper compositions are known as deep learning.
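Here is a minimal sketch of such a composition (the layer sizes are assumptions, and the non-linear activation used between layers is covered in the next section). Note how each layer's output dimension matches the next layer's input dimension:

```python
import numpy as np

def relu(z):
    # Non-linear activation between layers (discussed in the next section)
    return np.maximum(0.0, z)

# Illustrative layer sizes: 784 -> 128 -> 64 -> 10 (two hidden layers)
sizes = [784, 128, 64, 10]
rng = np.random.default_rng(0)
weights = [rng.standard_normal((n_out, n_in)) * 0.01
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n_out) for n_out in sizes[1:]]

def forward(x):
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)                  # each layer's output feeds the next layer
    return weights[-1] @ h + biases[-1]      # final layer produces the class scores

scores = forward(rng.random(784))            # shape (10,)
```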
Non-linear activation
Because our perceptron layers are linear functions, a stack of them could be reduced to a single linear function, which isn't very helpful. In other words, a multi-layer perceptron neural network (NN) could be simplified to a single-layer perceptron NN if the layers are linear. A non-linear activation function introduces non-linearity to the neural network.
We can introduce a non-linear activation function to transform our features.
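To make this concrete, here is a small sketch (the sizes are arbitrary assumptions) showing that two stacked linear layers collapse into one linear layer, while inserting a non-linearity such as ReLU breaks that equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(8)
W1, b1 = rng.standard_normal((6, 8)), rng.standard_normal(6)
W2, b2 = rng.standard_normal((4, 6)), rng.standard_normal(4)

# Two linear layers...
two_layers = W2 @ (W1 @ x + b1) + b2
# ...are exactly one linear layer with combined weights and bias.
W_combined, b_combined = W2 @ W1, W2 @ b1 + b2
assert np.allclose(two_layers, W_combined @ x + b_combined)

# With a non-linearity in between, the network is no longer a single linear map.
with_relu = W2 @ np.maximum(0.0, W1 @ x + b1) + b2
```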
Example non-linear activation function (Sigmoid): σ(x) = 1 / (1 + e^(−x))
Rectified Linear Unit (ReLU)
A popular non-linear activation function: ReLU(x) = max(0, x)

ReLU layers allow for locally linear mappings and help address the vanishing gradients issue. The vanishing gradients issue occurs when gradients diminish while training a deep learning model, and it often depends on the choice of activation function.
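A small sketch comparing the two activations and their gradients (using the formulas above): the sigmoid's gradient shrinks toward zero for inputs far from zero, which is what makes gradients vanish when many such layers are stacked, while the ReLU's gradient stays at 1 for positive inputs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-10.0, -1.0, 0.5, 10.0])
sigmoid_grad = sigmoid(z) * (1.0 - sigmoid(z))   # near 0 for large |z|: gradients vanish
relu_grad = (z > 0).astype(float)                # exactly 1 for all positive z
```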
Here is a fun visual for activation functions and hidden layers.
Convolutional Neural Networks (CNNs)
Training fully connected neural networks on vectorized images is too computationally expensive, since every neuron would need a weight for every pixel. Instead we use convolution.
Convolution works by sliding a kernel over an image. Each neuron learns its own filter (kernel) and convolves it with the image. The result of this convolution process is a feature map.
This is known as a convolutional neural network. We decide how many filters and layers to train.
The original convolution function is transformed into a per-layer operation:

h_j^(l+1) = f( Σ_c w_(j,c)^(l+1) * h_c^(l) + b_j^(l+1) )

where l is the layer number, f is the non-linear activation, * denotes convolution with a k×k kernel (k is the kernel size), c indexes the # of channels (input), and j indexes the filters (depth of the output).
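Here is a hedged sketch of a single convolution step (one 3×3 filter slid over a grayscale image with no padding; the sizes are illustrative). Deep learning libraries implement this as cross-correlation but call it convolution:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide `kernel` over `image` (no padding) and return the feature map."""
    H, W = image.shape
    k = kernel.shape[0]                        # assume a square k x k kernel
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)
    return out

image = np.random.rand(28, 28)                 # illustrative grayscale image
kernel = np.random.randn(3, 3)                 # one learned 3x3 filter
feature_map = conv2d_valid(image, kernel)      # shape (26, 26)
```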
Learn More
If you are interested in learning more about neural networks, I recommend reading my article on neural network layers!