# nir143
## About nir143

1. > I think you are confusing genes and chromosomes. Is this the cars that are used to generate the next generation? If very few or none of the other cars reach 95% fitness you will get very low genetic diversity, which is bad. Using something like tournament selection, all cars can still be chosen but with different probabilities.

   Yes, all the "best" cars are used for the next generation's crossover.

   > I don't get this. Crossover is used to swap some genes between two chromosomes.

   If the program decides to apply crossover to a car (the crossover rate is about 0.5), then for each weight in the network the new "child" randomly picks which parent it takes that weight from.

   > When you say +/- 30% you mean in the range [-30%, 30%]? Not sure, but 30% sounds kind of much.

   Yes: the program randomly decides whether to add to or subtract from the weight (50% each), then I add/subtract random.NextDouble() * (val / 3), which is about 33%.

   > What happens if the car has been colliding since the last step but is not currently colliding? Will that still affect the fitness? If you fail here it can explain why they don't learn to avoid walls.

   That situation can't happen: if the car collides I don't allow it to move into the wall, so from the step it collides it keeps colliding until the end of the generation. How could I improve this?

   > Are you taking the road direction into account so that you don't reward movement in the wrong direction?

   At the moment I just have a blank screen and I'm trying to "teach" the cars to avoid the walls.

   > Is there one sensor at the front, back, left and right of the car? I have no idea how your track looks, but imagine the car driving on a straight road that has a 90° turn later on. No sensor will be able to see the turn until the car is already in the turn. Could this be too late, so a crash is unavoidable? Maybe you need more sensors to be able to learn better.

   Good idea, I'll add more sensors at the front of the car. Thank you for your help :)
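The selection, crossover, and mutation scheme discussed above can be sketched as follows. This is a minimal illustrative Python sketch, not the poster's XNA/C# code: the population is assumed to be a list of flat weight vectors with precomputed fitness values, and all names are hypothetical.

```python
import random

def tournament_select(population, fitnesses, k=3):
    """Pick k random candidates and return the fittest of them.
    Weaker cars can still be chosen sometimes, which preserves
    genetic diversity (the point raised in the quoted reply)."""
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]

def uniform_crossover(parent_a, parent_b):
    """For each network weight, the child randomly inherits the
    value from one parent or the other, as described above."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def mutate(weights, rate=0.05):
    """With some probability, perturb one randomly chosen weight
    by up to +/- a third of its own magnitude (the ~33% scheme)."""
    child = list(weights)
    if random.random() < rate:
        i = random.randrange(len(child))
        delta = random.random() * abs(child[i]) / 3
        child[i] += delta if random.random() < 0.5 else -delta
    return child
```

One practical note on the design: unlike keeping only chromosomes within 95% of the best fitness, tournament selection never collapses the breeding pool to a single individual, so diversity survives even when one car dominates.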
2. Hi, I'm trying to program a racing game in which a computer-controlled car learns the road with a neural network, and I use a genetic algorithm to train the network. I defined 150 steps per generation and 100 cars per generation. Each car (chromosome) carries the weights of the network as its data.

   Each generation I keep the best chromosomes of that generation: the best one plus every chromosome whose fitness is at least 95% of it. The crossover method picks each weight from one of the two "parents" (the best of the last generation). The mutation method selects one of the weights and changes it by +/- 30% of its value.

   Each generation runs for 150 steps; after each step I add to or subtract from a chromosome's fitness. At the moment I just have a clear screen (half of the normal XNA screen). If the car collides with a wall I subtract 1 from the fitness; if the car moves normally (without colliding) I add the ratio of the distance moved to the maximum possible: say the car moved half a meter and the car's speed is one meter per step, then I add 0.5/1, i.e. moved/carspeed.

   Each car has four sensors that report the distance to each wall. If the wall is out of sensor range the sensor reads -1; otherwise it reads the distance to the wall divided by the maximum range, (width+height)/3.

   As far as I can see, the cars learn to cover more area as the generations go on, but they can't learn to avoid the walls :S Around generation 15 the best car starts touching the walls, and after that it just gets faster and faster while still hitting them. Is there something wrong with my general idea? Thanks very much for your help :)
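The per-step fitness update and the sensor normalization described above might look like the following. This is an illustrative Python sketch under the post's stated rules, not the poster's actual code; `moved`, `car_speed`, and the function names are assumptions.

```python
def step_fitness(fitness, collided, moved, car_speed):
    """Per-step fitness update: -1 on collision, otherwise reward
    forward progress as a fraction of the maximum step distance."""
    if collided:
        return fitness - 1.0
    return fitness + moved / car_speed

def read_sensor(distance_to_wall, width, height):
    """Normalize a wall-distance reading into [0, 1];
    -1 means no wall within the sensor's maximum range."""
    max_range = (width + height) / 3.0
    if distance_to_wall > max_range:
        return -1.0
    return distance_to_wall / max_range
```

Note one quirk of this encoding: the sensor jumps discontinuously from 1.0 (wall at the edge of range) to -1.0 (wall just beyond range), which can make the input signal harder for a small network to interpret than, say, clamping out-of-range readings to 1.0.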
3. Yeah, I just forgot to write the line where:

   ```
   foreach (neural in nextlayer)
       sum += neural.value * currentneural.weights[neural];
   myerror = sum * myoutput * (1 - myoutput);
   ```

   It seems like when I train on the normal 4 examples, each pair of examples with different outputs just contradicts the other (they push the weights in opposite directions) :S
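For reference, standard backpropagation accumulates the downstream neurons' *errors* (deltas), not their output values, when computing a hidden neuron's error; summing the next layer's outputs instead does not propagate the gradient at all and is consistent with every example pulling the weights in a different direction. A sketch of the textbook formula in Python (all names illustrative):

```python
def sigmoid_derivative(output):
    # derivative of the logistic function, expressed via its output
    return output * (1.0 - output)

def hidden_delta(output, next_deltas, next_weights):
    """Textbook backprop hidden-layer error: the weighted sum of
    the NEXT layer's deltas (errors), times the local activation
    derivative. next_weights[k] is the weight from this hidden
    neuron to the k-th neuron of the next layer."""
    s = sum(d * w for d, w in zip(next_deltas, next_weights))
    return s * sigmoid_derivative(output)
```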
4. I'm using the logistic activation function (sigmoid curve). I tried what you said, using 0.9/0.1 instead of 1/0 and removing the bias, but the output still tends toward 0.5 :S
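A quick sanity check on why outputs stuck at 0.5 point toward the weighted sums rather than the targets: the logistic function maps a net input of exactly 0 to exactly 0.5, so outputs pinned near 0.5 usually mean the incoming weighted sums are collapsing toward zero (illustrative Python):

```python
import math

def sigmoid(x):
    """Logistic activation: maps net input 0 to exactly 0.5."""
    return 1.0 / (1.0 + math.exp(-x))

# A net input of 0 (e.g. small symmetric weights whose
# contributions cancel) yields an output of exactly 0.5.
```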
5. I also tried using a genetic algorithm instead of back-propagation, and the output still tends toward 0.5 :S Does that mean the problem must be in the feed-forward formulas?
6. I'm not strong in this kind of math :S but I followed a few guides about the formulas (and about the theory, of course), and as far as I checked the derivative function is correct. I tried to give as much detail as I can to make the formulas/code understandable :) I went through the code a few times and still don't understand where the problem is :S
7. I tried some tests. As I said, if I use only one training example everything works fine (the error drops to about 0.04 after 600 training passes). I don't understand what you're advising me to do: change the weights a little bit? :S (They are created randomly.) I think I misunderstood you. The 4-5 formulas I wrote here are fine, right? Thanks very much for your help :)
8. After reading some articles about neural networks (back-propagation), I tried to write a simple neural network myself. I decided on an XOR neural network. My problem is that when I train the network on only one example, say 1,1,0 (as input1, input2, targetOutput), after about 500 training passes the network answers 0.05. But if I try more than one example (say 2 different ones, or all 4 possibilities), the network's output tends toward 0.5. I searched Google for my mistakes with no results :S I'll try to give as much detail as I can to help find what's wrong:

   - I tried networks of 2,2,1 and 2,4,1 (input layer, hidden layer, output layer).
   - The weighted input for every neuron is computed as:

     ```
     double input = 0.0;
     for (int n = 0; n < layers[i].Count; n++)
         input += layers[i][n].Output * weights[n];
     ```

     where 'i' is the current layer and 'weights' are all the weights from the previous layer.
   - The last layer's (output layer's) error is defined by:

     ```
     error = value * (1 - value) * (targetvalue - value);
     ```

     where 'value' is the neuron's output and 'targetvalue' is the target output for that neuron.
   - The error for the other neurons is defined by:

     ```
     foreach (neural in nextlayer)
         sum += neural.value * currentneural.weights[neural];
     ```

   - All the weights in the network are adapted by this formula (for the weight from neural to neural2):

     ```
     weight += LearnRate * neural.myvalue * neural2.error;
     ```

     where LearnRate is the network's learning rate (0.25 in my network).
   - The bias weight for each neuron is updated by:

     ```
     bias += LearnRate * neural.myerror * neural.Bias;
     ```

     where Bias is a constant value of 1.

   That's pretty much all I can detail. As I said, the output tends toward 0.5 with different training examples. Thank you very very much for your help.
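For comparison, here is a minimal, self-contained XOR back-propagation trainer. It is a Python sketch, not the poster's C# code: a 2-4-1 network with logistic activations and per-neuron biases, using the same output-layer delta as in the post, but with hidden deltas computed from the downstream *error* rather than the downstream output. All parameter values (learning rate, epoch count, seed) are illustrative, and depending on initialization gradient descent on XOR can occasionally settle in a poor local minimum, so treat this as a sketch rather than a guaranteed solution.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(hidden=4, lr=0.5, epochs=10000, seed=0):
    """Train a 2-hidden-layer-unit... rather: a 2 -> hidden -> 1
    network on XOR with per-pattern gradient descent, and return
    the four outputs after training (targets are 0, 1, 1, 0)."""
    rng = random.Random(seed)
    w_ih = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(hidden)]
    b_h = [rng.uniform(-1, 1) for _ in range(hidden)]
    w_ho = [rng.uniform(-1, 1) for _ in range(hidden)]
    b_o = rng.uniform(-1, 1)
    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

    def forward(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(w_ih[j], x)) + b_h[j])
             for j in range(hidden)]
        o = sigmoid(sum(w * hj for w, hj in zip(w_ho, h)) + b_o)
        return h, o

    for _ in range(epochs):
        for x, t in data:
            h, o = forward(x)
            # output delta: derivative * (target - output), as in the post
            d_o = o * (1 - o) * (t - o)
            # hidden deltas use the downstream ERROR times the weight,
            # not the downstream neuron's output value
            d_h = [h[j] * (1 - h[j]) * d_o * w_ho[j] for j in range(hidden)]
            for j in range(hidden):
                w_ho[j] += lr * h[j] * d_o
                for i in range(2):
                    w_ih[j][i] += lr * x[i] * d_h[j]
                b_h[j] += lr * d_h[j]
            b_o += lr * d_o
    return [forward(x)[1] for x, _ in data]
```

Trained this way, the mean squared error drops well below the 0.25 that an everything-is-0.5 network produces, which is the symptom described in the post.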