After convolution, CNNs use pooling layers to simplify and stabilize the learning process.
Pooling reduces the spatial dimensions while keeping the important features.
This step reduces computation and helps prevent overfitting by making the representation less sensitive to small shifts and distortions in the input.
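As a concrete illustration, here is a minimal sketch of 2×2 max pooling with stride 2. NumPy, the function name, and the example values are assumptions for illustration, not part of the original article.

```python
import numpy as np

def max_pool_2x2(feature_map):
    """Downsample a 2D feature map with a 2x2 window and stride 2."""
    h, w = feature_map.shape
    # Trim odd rows/columns so the map divides evenly into 2x2 windows.
    fm = feature_map[: h // 2 * 2, : w // 2 * 2]
    # Reshape into 2x2 blocks and keep the maximum of each block.
    return fm.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.array([[1, 3, 2, 4],
               [5, 6, 1, 2],
               [7, 2, 9, 1],
               [3, 4, 5, 6]], dtype=float)
print(max_pool_2x2(fm))
# [[6. 4.]
#  [7. 9.]]
```

Each 2×2 block is replaced by its largest value, so the 4×4 map shrinks to 2×2 while the strongest responses survive.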
After pooling, activation functions introduce non-linearity into the network.
Without them, a CNN would collapse into a stack of linear operations, unable to learn complex decision boundaries.
| Activation | Formula | Use |
|---|---|---|
| ReLU | max(0, x) | Default activation in CNNs |
| Sigmoid | 1 / (1 + e⁻ˣ) | Binary classification |
| Softmax | eˣ / Σeˣ | Multi-class output layer |
Sigmoid: Maps input values to a range between 0 and 1, useful for binary classification.
Sigmoid(x) = 1 / (1 + math.exp(-x))
ReLU: Passes positive values through unchanged and sets negative values to zero, introducing non-linearity cheaply:
ReLU(x) = max(0, x)
Tanh: Maps input values to a range between -1 and 1, helping center data around zero.
Tanh(x) = (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))
Softmax: Converts logits into probabilities that sum to 1, ideal for multi-class classification.
output = [math.exp(i) / sum(math.exp(j) for j in inputs) for i in inputs]
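Collecting the formulas above into one runnable sketch (the function names are my own; only the standard-library math module is assumed):

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1 / (1 + math.exp(-x))

def relu(x):
    # Keeps positive values, zeroes out negatives.
    return max(0.0, x)

def tanh(x):
    # Squashes input into the range (-1, 1), centered around zero.
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

def softmax(logits):
    # Subtract the max for numerical stability, then normalize to probabilities.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

print(sigmoid(0.5))               # ~0.622
print(relu(-2.0), relu(3.0))      # 0.0 3.0
print(tanh(1.0))                  # ~0.762
print(softmax([2.0, 1.0, 0.1]))   # probabilities summing to 1
```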
Together, convolution, pooling, and activation layers form the backbone of CNNs.
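To show how these pieces stack in practice, here is a minimal convolution → ReLU → pooling block, assuming PyTorch is available; the channel counts, kernel size, and input size are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

# One conv -> ReLU -> max-pool block: the pattern repeated throughout CNNs.
block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),  # convolution
    nn.ReLU(),                                                            # non-linearity
    nn.MaxPool2d(kernel_size=2),                                          # spatial downsampling
)

x = torch.randn(1, 3, 32, 32)    # a dummy 32x32 RGB image
print(block(x).shape)            # torch.Size([1, 16, 16, 16])
```

The convolution extracts features, ReLU keeps only positive responses, and pooling halves the spatial dimensions, exactly the sequence described above.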