
# basic neural network


Thank you very much for your help.

Can anyone help?

Debugging your problem is hard to do (especially without access to the code itself), and I am not willing to do it for you. But I can tell you how I would go about doing it.

Do you understand why the update rules are what they are? They are supposed to be taking one little step in gradient descent. The specific formulas you use are ways of computing the derivative of the error function (typically the square of the difference between the output and the desired output) with respect to each weight. You can try to change the weights a tiny bit, measure the change in the error function and see if that matches your computation.
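To make that concrete, here is a minimal sketch of such a gradient-descent step for a single sigmoid unit trained on one example with squared error. The weights, inputs, and learning rate here are all illustrative stand-ins, not the poster's actual code:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One gradient-descent step for a single sigmoid unit with squared
# error E = 0.5 * (output - target)**2.  Illustrative only.
def update_weights(weights, inputs, target, learning_rate=0.5):
    net = sum(w * x for w, x in zip(weights, inputs))
    output = sigmoid(net)
    # dE/dw_i = (output - target) * output * (1 - output) * x_i
    delta = (output - target) * output * (1.0 - output)
    return [w - learning_rate * delta * x for w, x in zip(weights, inputs)]

weights = [0.2, -0.4]
inputs = [1.0, 0.5]
target = 1.0
for _ in range(100):
    weights = update_weights(weights, inputs, target)
```

Repeating the step on a single example drives the output toward the target, which is consistent with the behaviour reported above for one training example.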

I've tried some tests. As I said, if I use only one training example everything works fine (the error gets down to about 0.04 after 600 training iterations).
I don't understand what you're advising me to do.
Change the weights a little bit? They were created randomly, so I think I misunderstood you.
The 4-5 formulas I wrote here are fine, right?

Thanks very much for your help :)


> I've tried some tests. As I said, if I use only one training example everything works fine (the error gets down to about 0.04 after 600 training iterations).
> I don't understand what you're advising me to do.
> Change the weights a little bit? They were created randomly, so I think I misunderstood you.
> The 4-5 formulas I wrote here are fine, right?
>
> Thanks very much for your help :)

This is what I mean by changing the weights a little bit. After you evaluate the network, pick a weight, add 0.001 to it and evaluate it again. If you consider the error function as a function of the value of the weight, you can now compute `(f(w+0.001) - f(w))/0.001`, which should be close to the derivative. I don't know if the formulas you posted are correct, but the way I would figure out if they are correct is by trying to understand them as a gradient-descent step, which involves computing that derivative.
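The check described above can be sketched like this. The one-unit network and its weights are only a stand-in (not the poster's actual code); substitute your own forward pass for `error` and your own backprop formula for `analytic_gradient`:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Error of a single sigmoid unit on one example (stand-in network).
def error(weights, inputs, target):
    net = sum(w * x for w, x in zip(weights, inputs))
    return 0.5 * (sigmoid(net) - target) ** 2

# Finite-difference estimate of dE/dw_i: bump one weight and re-evaluate.
def numerical_gradient(weights, inputs, target, i, eps=1e-3):
    bumped = list(weights)
    bumped[i] += eps
    return (error(bumped, inputs, target) - error(weights, inputs, target)) / eps

# The analytic derivative your update rule should be using.
def analytic_gradient(weights, inputs, target, i):
    net = sum(w * x for w, x in zip(weights, inputs))
    output = sigmoid(net)
    return (output - target) * output * (1.0 - output) * inputs[i]

weights = [0.3, -0.7]
inputs = [1.0, 2.0]
target = 1.0
num = numerical_gradient(weights, inputs, target, 0)
ana = analytic_gradient(weights, inputs, target, 0)
```

If the two values disagree by much more than `eps`, the backprop formula (or the forward pass it is based on) has a bug.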

I'm not strong in this kind of math :S
But I've followed a few guides about the formulas and the theory, and as far as I checked, the derivative function is correct.
I tried to give as much detail as I can to make the formulas/code understandable :)
I've gone through the code a few times and still don't understand where the problem is :S

I've also tried using a genetic algorithm instead of back-propagation, and the output still tends toward 0.5 :S
Does that mean the problem should be in the feed-forward formulas?

Try using 0.1 instead of 0, and 0.9 instead of 1, for your inputs and outputs. What is your activation function? Also, try removing the bias nodes for now to see if that makes a difference.

I'm using the logistic activation function (a sigmoid curve).

I've tried what you said with 0.9/0.1 instead of 1/0 and removing the bias, but the output still tends toward 0.5 :S
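Since the suspicion above is that the bug is in the feed-forward formulas, a minimal two-layer feed-forward pass to compare against might look like the sketch below (layer sizes and weights are illustrative, not the poster's code; the bias is folded in as an extra +1 input to each layer). Note that a sigmoid outputs exactly 0.5 when its net input is 0, so an output stuck near 0.5 for every input usually means the net input to the output unit is staying near zero, which is worth printing out:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Minimal feed-forward pass: one hidden layer, sigmoid activations,
# bias folded in as a constant +1 input to each layer.  Illustrative only.
def feed_forward(inputs, hidden_weights, output_weights):
    extended = inputs + [1.0]            # append bias input
    hidden = [sigmoid(sum(w * x for w, x in zip(row, extended)))
              for row in hidden_weights]
    hidden = hidden + [1.0]              # bias input for the output layer
    return [sigmoid(sum(w * h for w, h in zip(row, hidden)))
            for row in output_weights]

# Two inputs, two hidden units, one output (each row: weights + bias weight).
hidden_weights = [[0.5, -0.5, 0.1],
                  [-0.3, 0.8, -0.2]]
output_weights = [[1.0, -1.0, 0.3]]
out = feed_forward([0.0, 1.0], hidden_weights, output_weights)
```

With non-degenerate weights, different inputs should produce different outputs; if your network returns (nearly) the same value for every input, the forward pass or the weight indexing is the place to look.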
