Dense (Fully Connected) Layers Explained

Carla Martins
3 min read · Jun 22, 2023
Photo by Alina Grubnyak on Unsplash

Dense layers are called ‘fully connected’ because each neuron in a dense layer is connected to every neuron in the previous layer. This is in contrast to convolutional layers, where each neuron is connected only to a small region of the input. You can learn more about convolutional layers here:
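To make the idea concrete, here is a minimal NumPy sketch of a dense layer: every output neuron combines all of the inputs through its own row of weights. The sizes and the ReLU activation are illustrative assumptions, not from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense layer maps n_in inputs to n_out outputs.
# Every output neuron uses ALL n_in inputs: y = activation(W @ x + b).
n_in, n_out = 4, 3
W = rng.standard_normal((n_out, n_in))  # one weight per (output, input) pair
b = np.zeros(n_out)                     # one bias per output neuron

x = rng.standard_normal(n_in)           # input vector
y = np.maximum(0.0, W @ x + b)          # ReLU activation

print(y.shape)  # (3,)
```

Note that `W` holds `n_out * n_in` weights, one for every connection, which is exactly what ‘fully connected’ means.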

https://medium.com/@cdanielaam/a-gentle-introduction-to-convolution-neural-networks-cnn-9455dfda49be

But before we proceed, we need to understand the concept of ‘flattening’ or ‘reshaping’.

In the context of CNNs, flattening (or reshaping) is the process of converting a multi-dimensional tensor into a one-dimensional array.
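The operation can be sketched in one line of NumPy; the small example shape is an assumption for illustration. Flattening changes only the shape, not the values or their (row-major) order.

```python
import numpy as np

# A small 3-D feature map: 2 x 2 spatial grid with 3 channels.
t = np.arange(2 * 2 * 3).reshape(2, 2, 3)

# Flatten to one dimension; element count and order are preserved.
flat = t.reshape(-1)   # equivalently: t.flatten()

print(flat.shape)  # (12,)
```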

You may also want to read my previous article about pooling layers.

https://medium.com/@cdanielaam/pooling-layer-explained-and-its-importance-in-cnn-41cf1304ddfc

Suppose the output of a previous pooling layer is 11×11×98. To prepare the data for the dense layer, we flatten this tensor into a one-dimensional vector while preserving the total number of elements: 11 × 11 × 98 = 11,858. With this operation, we obtain a vector with 11,858 elements that will be the input to the dense layer.
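This count is easy to verify in NumPy (the 11×11×98 tensor here is just a zero-filled stand-in for the pooling output):

```python
import numpy as np

pooled = np.zeros((11, 11, 98))  # stand-in for the pooling-layer output
flat = pooled.reshape(-1)        # flatten into a one-dimensional vector

print(flat.size)  # 11858 == 11 * 11 * 98
```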
