
## Recommended Posts

Is it possible for the output of a neural network to be >1 or <0 when using the sigmoid function as the layer activation function?

##### Share on other sites
Not if you've done it properly. If I were any good at calculus I could prove it to you. Try putting in the largest and smallest possible inputs and see what you get. If you do actually want a different output range, which sounds reasonable, you'll have to scale and/or offset it, I suppose.

##### Share on other sites
Since the exponential function approaches zero as its exponent goes towards minus infinity and approaches infinity as its exponent approaches infinity, the sigmoid 1/(1+exp(-x)) will approach 1/(1+0)=1 as x goes to infinity and 1/(1+inf)=0 as x goes to minus infinity.

So without scaling or using a different activation function, you cannot get an output outside [0,1].
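The bound above is easy to check numerically. A minimal sketch (the function name `sigmoid` is mine, not from the thread) showing that even extreme inputs stay strictly inside (0, 1):

```python
import math

def sigmoid(x):
    # Logistic sigmoid: 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

# Even for extreme inputs the output stays strictly inside (0, 1).
for x in (-30.0, -1.0, 0.0, 1.0, 30.0):
    y = sigmoid(x)
    assert 0.0 < y < 1.0
    print(f"sigmoid({x:6.1f}) = {y:.6f}")
```

Note that in double precision, sigmoid(x) rounds to exactly 1.0 for x above roughly 37, so "strictly inside (0, 1)" only holds mathematically, not for arbitrarily large floating-point inputs.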

##### Share on other sites
Are there any other functions you could use so you can get values > 1 and < 0?

##### Share on other sites
As Kylotan mentioned, you can use the same function and scale or offset it depending on what values you require.

E.g. output x 100 will give a value between 0 and 100; (output x 200) - 100 will give a value between -100 and 100, etc.
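That affine rescaling can be written as a one-liner. A sketch, assuming a standard logistic sigmoid (the names `sigmoid` and `scaled_output` and the default range [-100, 100] are mine):

```python
import math

def sigmoid(x):
    # Logistic sigmoid, output in (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def scaled_output(x, lo=-100.0, hi=100.0):
    # Affine rescale of the sigmoid's (0, 1) range to (lo, hi)
    return lo + (hi - lo) * sigmoid(x)

print(scaled_output(0.0))    # → 0.0, the midpoint of [-100, 100]
print(scaled_output(10.0))   # just below the upper bound 100
print(scaled_output(-10.0))  # just above the lower bound -100
```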

##### Share on other sites
Hmm, I'm using sigmoid for a feed-forward backpropagation network.

Would I multiply it like this:
output = (1/(1+exp(-(input+bias))))*100
?

And would I modify anything in the training? In the training I've got calculations like:
1 - output

Would I replace the 1 with 100?

edit:
I tried this, and the calculations no longer worked properly.
thx

##### Share on other sites
An easy way to incorporate this kind of scaling is to have a linear layer after the sigmoid layer. The weights of the linear layer then learn the proper scaling factor.
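A minimal sketch of that architecture, assuming NumPy and a single linear output unit (all names and layer sizes here are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Sigmoid hidden layer followed by a purely linear output layer;
    # the linear weights are free to scale/offset the (0, 1) activations,
    # so the network's output is unbounded.
    h = sigmoid(x @ w_hidden + b_hidden)
    return h @ w_out + b_out

# 2 inputs -> 3 hidden units -> 1 output
w_hidden = rng.standard_normal((2, 3))
b_hidden = np.zeros(3)
w_out = rng.standard_normal((3, 1))
b_out = np.zeros(1)

x = np.array([[0.5, -1.2]])
y = forward(x, w_hidden, b_hidden, w_out, b_out)
print(y)  # can be any real value, not just something in (0, 1)
```

During backpropagation the output layer's error term is just the raw error (its activation derivative is 1), so training proceeds as usual.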

##### Share on other sites
JohnnyBravo, the offset/scaling thing is purely to get the end result into a form more useful for your game. Think of it as a filter on the output to make it compatible with your engine. It has no relevance to the operation of the net itself and shouldn't play a part in the training process.

##### Share on other sites
Quote:
Original post by johnnyBravo: Are there any other functions that you could use so you can get > 1 and < 0?

Just add an output layer that is a linear sum of the basis function activations. If μ_i is the activation value of the i'th basis function, then the j'th output is given by

y_j = Σ_{i=1..n} μ_i w_{ij}, i.e. y = w'μ

where w' is the transpose of the output layer weight matrix w. When you come to train the network, if you were using gradient descent, you'd want an update rule for w along the lines of

w(k+1) = w(k) - η dE/dw,

where η is the learning rate and E is your error function over the output states (the minus sign because you descend the error gradient).

Timkin

edit: and I've just noticed an AP said this before... apologies for the double up
