# Neural Nets and Gaming

Here we go.. ah ... here we go... ah here we go again...

You can download a neural net demo. If it doesn't work on your computer, don't shoot me, because it's my first 3D project for DirectX. It will try about 10 times before it gets it right. Oh, and I do my writing on my old computer and my programming on my fast computer, so sorry if it's slow on your computer (here is a screen shot... from my old computer).

What are neural nets? This is a question I never get asked. But I am going to provide an answer anyway. A neural net is a graph of nodes and arcs that maps inputs to outputs. Sounds like a Petri net, doesn't it? Well, sounds are deceiving.

But why neural networks? Well, neural networks are fairly good at learning. Like the fuzzy logic expert I explained before, they can map velocities, positions, and other states to outputs. A lot of the time they are used as classifiers (i.e. I have a lot of data and I don't know how to determine what the different data means in an abstract sense). For instance, say I have a lot of temperature data from under my tongue (stay with me now) and my mom would like to know if I'm sick or just faking it. So she goes back to the kitchen and trains a neural net (no stereotypes, she does go back to the kitchen to train neural nets) on this recorded temperature data to help her decide. After she has finished training the neural net, she records more data, puts it in the neural net, and the output of the neural net says whether or not I am sick. Of course, if it does say I'm faking my illness.. I lecture her and say she didn't use enough data to train her neural net. Then she hits me over the head and I have to do work. Anyway, you get the idea.

Another way to use a neural net is to determine a mapping or function. A type of neural net can be shown to be a "universal approximator," which is kind of like a "universal soldier" but dealing with functions. Actually, what it means is that it can be trained to represent arbitrary continuous functions, which in layman's terms means "a bunch of pretty numbers in a graph form." Some problems are: we don't know how many nodes to use, and training the neural network too much can lead to overtraining (memorizing the training data instead of generalizing from it).

Here are two game possibilities for a neural network, just as examples:

A typical multilayer neural network is composed of three layers: an input layer, a hidden layer, and an output layer.

This guy is going to learn through learning (what the heck did I just say--I think I meant through a learning algorithm). We will use a type of learning called supervised learning (I should use some supervised learning on my grammar). Here is a picture demonstrating the process.

Let's take the game Lunar Lander and use a neural network to determine when to fire the thrust based on position and velocity measurements. In Lunar Lander, we wish to land our spacecraft on the ground smoothly. We shall pretend we are perfectly above the platform we are landing on, with no horizontal movement. We shall use one of the simplest types of neural network classifiers, called a Perceptron (no relation to Robotron).

Let's assume that the UN shuttle is governed by the following equations:

acceleration = -9.8 meters per second squared + thrust

new velocity = last measured velocity + acceleration*deltat

new position = last measured position + last measured velocity*deltat

i.e. gravity (which is oddly similar to Earth) is pulling down on the rocket.

We will set deltat to 0.1 seconds (i.e. we are taking 0.1 seconds per iteration of these equations--this is an approximation; look up Euler's method and step size on Google or something).

(Note we could actually make the model for the physics of the shuttle through a neural network, but we are using first principles of physics instead)
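As a minimal sketch, the physics equations above might look like this in Python (using the article's numbers: gravity -9.8, a thrust of 30, deltat of 0.1; the function and constant names are mine):

```python
GRAVITY = -9.8   # m/s^2, pulling the shuttle down
THRUST = 30.0    # m/s^2 added while the thruster fires
DELTA_T = 0.1    # seconds per iteration (the Euler step size)

def step(position, velocity, thrust_on):
    """Advance the shuttle one Euler step, following the equations above."""
    acceleration = GRAVITY + (THRUST if thrust_on else 0.0)
    new_position = position + velocity * DELTA_T   # uses last measured velocity
    new_velocity = velocity + acceleration * DELTA_T
    return new_position, new_velocity
```

Calling `step` in a loop with the thrust switch decided each iteration gives the game's motion.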

While the current US administration may train our neural network another way, we shall try to teach Dr. Perceptron to land the UN peace shuttle smoothly. Here is a picture of what our perceptron model looks like.

So the equation is:

thrust switch = hardlimiter(position*w1 + velocity*w2 + biasterm*w3)

thrust = (thrust switch, which is 0 or 1) * 30

The hardlimiter is just a function that maps negative numbers to 0 and zero or positive numbers to 1.

The goal in perceptron learning is to determine the best w1, w2, and w3 (the weights). By "best weights" I mean the weights that produce the minimum error while the neural network is training.
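As a sketch in Python (the weight names match the equation above; the example is hypothetical and any weight values would come from training):

```python
def hardlimiter(x):
    # 0 for negative inputs, 1 for zero or positive inputs
    return 1 if x >= 0 else 0

def thrust_switch(position, velocity, w1, w2, w3):
    bias = 1.0  # the bias input is always 1
    return hardlimiter(position * w1 + velocity * w2 + bias * w3)

def thrust(position, velocity, w1, w2, w3):
    # the switch (0 or 1) scales a fixed thrust of 30
    return thrust_switch(position, velocity, w1, w2, w3) * 30.0
```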

So how do we get the data to train the neural network? You could write a few data points down like this:

| Position | Velocity | ThrustSwitch |
|----------|----------|--------------|
| 100.0000 | 0.000000 | 0.0000000000 |
| 100.0000 | -10.0000 | 1.0000000000 |
| 100.0000 | 10.00000 | 0.0000000000 |
| 50.0000 | -1.000000 | 0.0000000000 |
| 10.0000 | -5.000000 | 1.0000000000 |

Then train your neural net on this. But, you say, this is not a lot of data. That's the beauty of neural networks: their ability to generalize from small data sets. We probably need a few more examples, but hey, let's just say we did and don't.

Another way is to play the game yourself and train the neural network while you play. Yet another way, and probably a better one, is to record position, velocity, and thrust switch to a file while you play, keep the data from a successful landing, and then train the neural network offline.

We shall use this data as an example:

| Position | Velocity | ThrustSwitch |
|----------|----------|--------------|
| 100.0000 | 0.000000 | 0.0000000000 |
| 100.0000 | -10.0000 | 1.0000000000 |
| 100.0000 | 10.00000 | 0.0000000000 |
| 50.0000 | -1.000000 | 0.0000000000 |
| 10.0000 | -5.000000 | 1.0000000000 |

The learning algorithm to update the weights is:

error = (thrust switch from data) - (thrust switch from Dr. Perceptron)

w1(k+1) = w1(k) + learningrate * error * position

w2(k+1) = w2(k) + learningrate * error * velocity

w3(k+1) = w3(k) + learningrate * error * biasterm

The bias term is always 1. The learning rate is usually something small like 0.1 or 0.01, depending on how fast you want your neural network to converge (however, if you go too fast--a big learning rate--it may not converge). k is an iteration count, k = 0, 1, 2, ... (it is an index, not a multiplication).
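One update step of this rule can be sketched in Python (same hardlimiter as before; the function names are mine, and the data vector and starting weights are the ones from the worked example that follows):

```python
def hardlimiter(x):
    # 0 for negative inputs, 1 for zero or positive inputs
    return 1 if x >= 0 else 0

def update(w, inputs, target, learning_rate=0.01):
    """One perceptron learning step: w(k+1) = w(k) + learningrate * error * input."""
    output = hardlimiter(sum(wi * xi for wi, xi in zip(w, inputs)))
    error = target - output
    return [wi + learning_rate * error * xi for wi, xi in zip(w, inputs)], error

# First data vector: position 100, velocity 0, bias 1, desired thrust switch 0
w, err = update([0.02, 0.22, -0.53], (100.0, 0.0, 1.0), 0)
```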

---

So let's now go through a computation example:

We start with random weights:

w1 = 0.02

w2 = 0.22

w3 = -0.53

learning rate = 0.01

Let's take the first data vector:

| Position | Velocity | ThrustSwitch |
|----------|----------|--------------|
| 100.0000 | 0.000000 | 0.0000000000 |

Now we put it in our neural network and get:

thrust switch = hardlimiter(0.02*100 + 0.22*0 + (-0.53)*1) = hardlimiter(1.47) = 1

We now do the learning:

error = 0.00000 - 1 = -1

[w1, w2, w3](k+1) = [0.02, 0.22, -0.53] + (0.01) * (-1) * [100, 0, 1]

which gives

w1 = 0.02+(0.01)*(-1)*100 = 0.02 - 1 = -0.98

w2 = 0.22 + (0.01)*(-1)*0 = 0.22

w3 = -0.53 + (0.01)*(-1)*1 = -0.54

---

Now we go on to the next data vector:

| Position | Velocity | ThrustSwitch |
|----------|----------|--------------|
| 100.0000 | -10.0000 | 1.0000000000 |

We put it in our neural network and get:

thrust switch = hardlimiter(-0.98*100+0.22*(-10) + (-0.54)*1) =hardlimiter(-100.74) = 0

We now do the learning:

error = 1.000000000 - 0 = 1

[w1, w2, w3](k+1) = [-0.98, 0.22, -0.54] + (0.01) * (1) * [100, -10, 1]

which gives

w1 = -0.98 + (0.01)*(1)*100 = -0.98 + 1 = 0.02

w2 = 0.22 + (0.01)*(1)*(-10) = 0.12

w3 = -0.54 + (0.01)*(1)*1 = -0.53

We continue like this through the rest of the data set, adding up the absolute values of the errors we have calculated.

After we have completed a full pass, we check the total error... on the first pass it's usually bad (we want zero). If it is not small enough, we start at the top of our data set and do the whole process over again, with our new weights of course (no need to start from scratch with random weights--that's where all the learning is). Once the error is good enough, we stop the process and save the weights. That's it: we have trained our neural network, and hopefully Dr. Perceptron will fire the thrust correctly for the positions and velocities the UN shuttle encounters.. or someone might be happy.

To summarize the steps:

1. Set up the network structure (you can add more inputs and outputs--more than one switch)

2. Gather some data.

3. Start with some initial random weights (usually between -1 and 1).

4. Perform the training until the neural network error on the data set is sufficiently small or zero.

5. Use the trained Neural network in the game.
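The numbered steps above can be sketched as one complete (if minimal) training loop in Python. Everything follows the article's setup; the `max_epochs` safety cap and the optional `w0` starting weights are additions of mine:

```python
import random

def hardlimiter(x):
    # 0 for negative inputs, 1 for zero or positive inputs
    return 1 if x >= 0 else 0

# Step 2: the (position, velocity, thrust switch) data set from the article
DATA = [
    (100.0,   0.0, 0),
    (100.0, -10.0, 1),
    (100.0,  10.0, 0),
    ( 50.0,  -1.0, 0),
    ( 10.0,  -5.0, 1),
]

def train(data, w0=None, learning_rate=0.01, max_epochs=50000):
    # Step 3: random initial weights between -1 and 1 (or caller-supplied)
    w = list(w0) if w0 is not None else [random.uniform(-1, 1) for _ in range(3)]
    for _ in range(max_epochs):
        total_error = 0
        for position, velocity, target in data:
            inputs = (position, velocity, 1.0)  # bias input is always 1
            output = hardlimiter(sum(wi * xi for wi, xi in zip(w, inputs)))
            error = target - output
            total_error += abs(error)
            w = [wi + learning_rate * error * xi for wi, xi in zip(w, inputs)]
        # Step 4: stop once a whole pass over the data set gives zero error
        if total_error == 0:
            break
    return w

# Step 5: use the trained weights in the game loop
weights = train(DATA, w0=[0.02, 0.22, -0.53])
```

Because this data set is linearly separable, the perceptron convergence theorem guarantees the loop eventually reaches zero error, though the number of passes depends on the starting weights and learning rate.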

Well, I hope this was useful. You should note that this is a very simple neural network and there are many others out there. Hopefully, this will get you started.

Are you seeing similarities between how fuzzy logic and a neural network can be used? They both can do classification and control. We are developing a database of AI tools we can use later. If there are mistakes or something is unclear, I appoligize (notice the clever spelling, so I'm not really apologizing). My neural network wires are loose sometimes.

Why would you want to use this? Well, 3 weights are surely easier to store than 100,000 if-then statements. Plus, it's simple to compute. And if you don't like the behavior, you can always retrain the weights instead of rewriting hard-coded if-then statements.

If you found this confusing, go here.

If you want to know more about other types of neural networks go here.

I once had some students (yes, I was a TA once, scary isn't it), and they used a camera to measure the state of a checkerboard. They then used a neural network to determine the computer's move. A robot arm would then pick up a piece and move it to the proper position. It was a blast.

