
# About neural networks


## Recommended Posts

Is it possible for the output of a neural network to be >1 or <0 when using the sigmoid function as the layer activation function?

Not if you've done it properly. If I were any good at calculus I could prove it to you. Try putting in the largest and smallest possible values and see what you get. If you actually want a different output range, which sounds reasonable, you'll have to scale and/or offset it, I suppose.

Since the exponential function approaches zero as the exponent goes towards minus infinity and approaches infinity as the exponent approaches infinity, the entire expression will approach 1/(1+0)=1 and 1/(1+inf)=0.

So without scaling or using a different activation function, you cannot get an output outside [0,1].
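A quick numerical check of these limits (a sketch in Python; the thread itself isn't tied to any language, and the overflow guard is my own addition):

```python
import math

def sigmoid(x):
    # Guard: math.exp overflows for arguments above ~709,
    # and for very negative x the true value rounds to 0 anyway.
    if x < -700:
        return 0.0
    return 1.0 / (1.0 + math.exp(-x))

for x in (-50, -5, 0, 5, 50):
    print(x, sigmoid(x))
# Large negative inputs approach 0, large positive inputs approach 1;
# mathematically the output never leaves (0, 1), though in floating
# point the extremes round to exactly 0.0 or 1.0.
```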

Are there any other functions you could use so you can get values > 1 and < 0?

As kylotan mentioned, you can use the same function and scale or offset it depending on what values you require.

e.g. output x 100 gives a value between 0 and 100; (output x 200) - 100 gives a value between -100 and 100, and so on.
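The same affine trick as a small helper (a sketch; `lo` and `hi` are just the target range you pick, not anything prescribed by the thread):

```python
def scale(output, lo, hi):
    """Map a sigmoid output in [0, 1] onto the range [lo, hi]."""
    return output * (hi - lo) + lo

print(scale(0.0, -100, 100))  # -100.0
print(scale(0.5, -100, 100))  # 0.0
print(scale(1.0, 0, 100))     # 100.0
```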

Hmm, I'm using sigmoid for a feed-forward backpropagation network.

Would I multiply it like this:
output = (1/(1+exp(-(input+bias))))*100
?

And would I modify anything in the training? In the training I have calculations like:
1 - output

Would I replace the 1 with 100?

edit:
I tried this, and the calculations no longer worked properly.
Thanks

An easy way to incorporate this kind of scaling is to have a linear layer after the sigmoid layer. Then with the weights of the linear layer you learn the proper scaling factor.

JohnnyBravo, the offset/scaling step is purely to get the end result into a form more useful for your game. Think of it as a filter on the output to make it compatible with your engine. It has no relevance to the operation of the net itself and shouldn't play a part in the training process.
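One way to follow this advice in code: squash your training targets into [0, 1] and rescale only the game-facing output, leaving backprop (and terms like 1 - output) untouched. A sketch, where the -100..100 range is an assumed example, not something from the thread:

```python
# Assumed range the game works in; pick whatever your engine needs.
TARGET_MIN, TARGET_MAX = -100.0, 100.0

def to_net(value):
    """Game value -> [0, 1] target used during training."""
    return (value - TARGET_MIN) / (TARGET_MAX - TARGET_MIN)

def from_net(output):
    """[0, 1] net output -> game value, applied only outside training."""
    return output * (TARGET_MAX - TARGET_MIN) + TARGET_MIN

# The net only ever sees [0, 1] values, so the backprop maths is unchanged.
print(to_net(0.0))      # 0.5
print(from_net(0.5))    # 0.0
```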

Quote:
 Original post by johnnyBravo
 Are there any other functions that you could use so you can get > 1 and < 0?

Just add an output layer that is a linear sum of the basis function activations. If μ_i is the activation value of the i'th basis function, then the j'th output is given by

y_j = Σ_{i=1}^{n} μ_i w_ij, or in matrix form y = w'μ

where w' is the transpose of the output-layer weight matrix w. When you come to train the network, if you were using gradient descent, you'd want an update rule for w along the lines of

w(k+1) = w(k) - η dE/dw

where η is the learning rate and E is your error function over the output states (note the minus sign: you step against the gradient to reduce the error).

Timkin

edit: and I've just noticed an AP said this before... apologies for the double up
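A minimal sketch of that linear output layer and its gradient-descent update, assuming a squared-error loss; the sizes, learning rate, and target values are illustrative, not from the post:

```python
import random

random.seed(0)  # reproducible illustration

n_basis, n_out = 3, 2
# Output-layer weight matrix w, shape (n_basis, n_out), small random init.
w = [[random.uniform(-0.1, 0.1) for _ in range(n_out)] for _ in range(n_basis)]
eta = 0.1  # learning rate

def forward(mu):
    # y_j = sum_i mu_i * w_ij  -- linear output layer, no squashing,
    # so outputs are not confined to [0, 1].
    return [sum(mu[i] * w[i][j] for i in range(n_basis)) for j in range(n_out)]

def update(mu, target):
    # With E = 0.5 * sum_j (y_j - t_j)^2, dE/dw_ij = (y_j - t_j) * mu_i,
    # and the descent step is w <- w - eta * dE/dw.
    y = forward(mu)
    for i in range(n_basis):
        for j in range(n_out):
            w[i][j] -= eta * (y[j] - target[j]) * mu[i]

# Repeated updates drive the outputs toward targets well outside [0, 1].
mu = [0.2, 0.9, 0.5]        # example basis-function activations
target = [150.0, -40.0]
for _ in range(500):
    update(mu, target)
print(forward(mu))
```

For a single fixed input this is just linear regression, so the outputs converge to the targets; in a real net, μ would be the sigmoid layer's activations and you'd also backpropagate through them.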

