
In simple words, the activation function generates an output from a neuron's input signals. To understand what an activation function is, you first need to understand the structure of a neural network.

In a neural network, the hidden layers receive input signals from the input layer.

So what does an activation function do?

After receiving the input signals from the input layer, each neuron in a hidden layer combines them into a weighted sum, where the weights reflect the importance of each input signal.

That weighted sum is then passed to a function, and that function is the activation function. The activation function generates the neuron's output from these input signals.

So the main purpose of the activation function is to transform the weighted sum of the input signals into the neuron's output.

The activation function is applied in the hidden layers and in the output layer.
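The flow described above, a weighted sum followed by an activation, can be sketched as a single hypothetical neuron. The names `inputs`, `weights`, `bias`, and `neuron_output` are illustrative, not from any particular library:

```python
# A minimal sketch of one neuron: combine the inputs into a weighted sum,
# then pass that sum through an activation function.
def neuron_output(inputs, weights, bias, activation):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # weighted sum
    return activation(z)

# Example with a simple step activation:
step = lambda z: 1 if z >= 0 else 0
print(neuron_output([0.5, -1.0], [2.0, 1.0], 0.2, step))  # weighted sum = 0.2, so 1
```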

Now let’s move on to the types of activation functions.

Types of Activation Functions

There are several types of activation functions, but I will discuss only the most popular and widely used ones.

The most popular activation functions are-

  1. Threshold Function.
  2. Sigmoid Function.
  3. Rectifier Function.
  4. Hyperbolic Tangent (tanh) Function.
  5. Linear Function.

1. Threshold Activation Function-

The threshold function is a step function. On the X-axis you have the weighted sum, and on the Y-axis you have the output values 0 and 1. It is a very simple kind of function. The formula of the threshold function is-

φ(x) = { 1 if x ≥ 0; 0 if x < 0 }

According to the threshold function, if the weighted sum is less than 0, the function passes on 0; if it is greater than or equal to 0, the function passes on 1. The threshold function is a kind of yes/no function. It is very straightforward.
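This yes/no behaviour can be sketched directly from the formula above:

```python
# Sketch of the threshold (step) activation: phi(x) = 1 if x >= 0, else 0,
# where x is the neuron's weighted sum.
def threshold(x):
    return 1 if x >= 0 else 0

print(threshold(0.7))   # 1
print(threshold(-0.3))  # 0
print(threshold(0))     # 1, since 0 counts as "greater than or equal to 0"
```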

2. Sigmoid Function-

The sigmoid function produces a smooth, S-shaped curve.

The formula of the sigmoid function is-

φ(x) = 1 / (1 + e^(-x))

Here, x is the value of the weighted sum. This is the same function used in logistic regression.

So what is good about the sigmoid function? It is smooth; unlike the threshold function, it doesn’t have any kinks in its curve. It is a nice, gradual progression: values far below 0 approach 0, and values far above 0 approach 1.

The sigmoid function is useful in the final layer, that is, the output layer, especially when you are trying to predict probabilities.
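The formula above translates directly into code; this is a minimal sketch using only the standard library:

```python
import math

# Sketch of the sigmoid activation: phi(x) = 1 / (1 + e^(-x)),
# where x is the weighted sum.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0))    # 0.5, exactly halfway between the two extremes
print(sigmoid(5))    # close to 1
print(sigmoid(-5))   # close to 0
```

Because the output always lies between 0 and 1, it can be read directly as a probability.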

3. Rectifier Function-

The rectifier function, also known as ReLU, is one of the most popular activation functions in artificial neural networks, even though it has a kink in its curve. The formula of the rectifier function is-

φ(x) = max(x, 0)

The rectifier function outputs 0 for all negative inputs, and from 0 it increases linearly as the input value increases.

In the hidden layers, the rectifier function is the one most commonly used.
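The rectifier is the simplest of the formulas here, as a short sketch shows:

```python
# Sketch of the rectifier (ReLU) activation: phi(x) = max(x, 0),
# where x is the weighted sum.
def relu(x):
    return max(x, 0)

print(relu(-2.0))  # 0: negative inputs are clipped to zero
print(relu(3.0))   # 3.0: positive inputs pass through unchanged
```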

4. Hyperbolic Tangent(tan h)-

The hyperbolic tangent is very similar to the sigmoid function, but the hyperbolic tangent also goes below 0: its output ranges from -1 to 1 instead of 0 to 1. This can be useful for some applications.
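The standard definition tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)) can be sketched the same way as the sigmoid:

```python
import math

# Sketch of the hyperbolic tangent activation:
# tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)), ranging from -1 to 1.
def tanh(x):
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

print(tanh(0))    # 0.0: centred at zero, unlike the sigmoid's 0.5
print(tanh(3))    # close to 1
print(tanh(-3))   # close to -1
```

In practice you would call `math.tanh` directly; the explicit formula is shown here only to match the definition.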

5. Linear Function-

The linear function is very simple, without any conditions. The formula of a linear function is-

f(x) = a + x

where a is the bias and x is the weighted sum. This function produces a linear representation (a straight line) of its input. The advantage of a linear function is its simplicity.
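Following the formula above, the sketch is a one-liner; the default bias of 0.0 is an illustrative choice, not from the original answer:

```python
# Sketch of the linear activation from the formula above: f(x) = a + x,
# where a is the bias and x is the weighted sum.
def linear(x, a=0.0):
    return a + x

print(linear(2.5))       # 2.5: the input passes through unchanged
print(linear(2.5, 1.0))  # 3.5: the input shifted by the bias
```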

I hope now you have a better understanding of Activation functions.

Upvote if you found it helpful.

Happy Learning!
