johnnyBravo

Is there a type of Neural Network able to do this?


Hi, I'm creating a spaceship game, and I have ships with thrusters strategically placed around the ship to allow strafing, rolling, pitching etc. I was wondering whether it would be possible to use some kind of network where I could input the linearVelocity and angularVelocity vectors (which would be 6 input neurons) and have it output the amount of thrust (probably as a fraction, 0.0 to 1.0) for each thruster on the ship, so there would be one output neuron per thruster.

To get the error amount, you take the difference between the change in the ship's velocities and the inputted velocities, e.g.:

velocitiesDifference = shipNewVelocities - shipOldVelocities - inputtedVelocities;

So I'm guessing I would just use a feedforward network for this, but does anyone have any ideas on how I would train it? I don't think backpropagation could be used here.

Thanks. Btw, I know someone else posted a question like this a while ago, but they didn't end up using an ANN.
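Roughly what I mean, as a quick Python sketch (the names are made up and the velocities are assumed to be packed into 6-element lists [vx, vy, vz, wx, wy, wz]):

    # Sketch only: ship_new/ship_old are the ship's velocities before and after
    # the physics step, commanded is the 6-vector fed to the network.
    def velocity_error(ship_new, ship_old, commanded):
        # error = (actual change in velocities) - (requested change)
        return [(new - old) - cmd
                for new, old, cmd in zip(ship_new, ship_old, commanded)]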

Provided you can create a good fitness function (a function that can tell you whether one ship is performing better than another), you can use a genetic algorithm to train the ANN. See my website for more detail.
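For example (just a sketch, and the exact measure is up to you), a fitness function could reward genomes whose actual velocity change best matches the commanded change:

    # Hypothetical fitness: smaller total squared error -> higher fitness.
    def fitness(actual_change, commanded_change):
        err = sum((a - c) ** 2 for a, c in zip(actual_change, commanded_change))
        return 1.0 / (1.0 + err)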

Guest Anonymous Poster
The problem here is that your "error" isn't really an error at all. The error in the backpropagation algorithm is supposed to be the difference between the desired output and the actual output. It is used to determine how to shift the weights to move the output closer to the desired output.

If you don't know what you want your function to output, a neural network using backpropagation won't be able to figure it out either.
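For reference, the output-layer error term in standard backpropagation (assuming sigmoid output units) looks like this; the point is that it needs a known desired value for every output neuron:

    # Delta for one sigmoid output unit: (desired - actual) scaled by the
    # derivative of the activation, which for a sigmoid is actual * (1 - actual).
    def output_delta(desired, actual):
        return (desired - actual) * actual * (1.0 - actual)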

Quote:
Original post by johnnyBravo
And I was wondering if it would be possible to use some kind of network where I could input the linearVelocity and angularVelocity vectors into the network, which would be 6 input neurons.


Do you intend to feed the network the current linearVelocity and angularVelocity, or the desired linearVelocity and angularVelocity? This will make a difference to your network architecture and training scheme.

Timkin

Why bother with the complexity, overhead, training, untweakability, and inefficiency of an ANN? This is well-trodden ground for control systems.

Quote:
Original post by Sneftel
Why bother with the complexity, overhead, training, untweakability, and inefficiency of an ANN? This is well-trodden ground for control systems.


I agree absolutely... but the problem around here Sneftel is that people typically pick up a hammer and then start trying to make all their problems look like nails!

Quote:
Original post by Timkin
Quote:
Original post by Sneftel
Why bother with the complexity, overhead, training, untweakability, and inefficiency of an ANN? This is well-trodden ground for control systems.


I agree absolutely... but the problem around here Sneftel is that people typically pick up a hammer and then start trying to make all their problems look like nails!


hehe, yeah! They do have a habit of doing that.

There is one thing to point out though: if you just want to learn about ANNs, it helps to get some practical experience trying to solve *any* kind of problem, especially if it's fun.

Quote:
Original post by Timkin
Quote:
Original post by johnnyBravo
And I was wondering if it would be possible to use some kind of network where I could input the linearVelocity and angularVelocity vectors into the network, which would be 6 input neurons.


Do you intend to feed the network the current linearVelocity and angularVelocity, or the desired linearVelocity and angularVelocity? This will make a difference to your network architecture and training scheme.

Timkin



I think I'll feed it the 'new' velocities, e.g. roll by 0.2.

Quote:
Original post by Timkin
Quote:
Original post by Sneftel
Why bother with the complexity, overhead, training, untweakability, and inefficiency of an ANN? This is well-trodden ground for control systems.


I agree absolutely... but the problem around here Sneftel is that people typically pick up a hammer and then start trying to make all their problems look like nails!




:)

Quote:

Call me practical, but I think that problem can be solved with good old mathematical analysis, without the use of any learning at all.

I'm making the program more like a simulation to test my attempts at AI, so it's more of a learning experience than making a game.

Quote:
Original post by fup
Provided you can create a good fitness function (a fn that is able to tell you if one ship is performing better than another) you can use a genetic algorithm to train the ANN. See my website for more detail.




I'm trying out the genetic algorithms right now; I think they're looking quite promising.

I read all the tutorials you had there; they were pretty good.


Quote:

Why bother with the complexity, overhead, training, untweakability, and inefficiency of an ANN? This is well-trodden ground for control systems.

I've thought about doing that, i.e. whether it would be better to have the AI use them instead of me trying to evolve something to do it automatically. I could also use state-based AI and hard-code all the things like targeting, moving to positions etc., but there are just so many nails....

Edit:

Actually, I've been thinking of using two different AI schemes. As there are two opposing teams in this app, I've been thinking about having one use state-based AI, and the other a bunch of genetically evolved ANNs.

...might be cool if I can get the ANN ones to work.

A feedforward, backpropagation-trained network would probably be able to handle arbitrary thruster placement, and even loss of thrusters at run time, if you turn backpropagation back on whenever significant error is detected.

Let's say you have a network with 6 input neurons, each with an input range of -1 to +1, one for each axis of the desired linear and rotational acceleration (joystick axes, basically). If your thrusters don't rotate, each output neuron can represent the amount of thrust one thruster should produce. You might have a hidden layer with a moderate but not excessive number of nodes (maybe 15-20), since the equations the network will be approximating are probably fairly simple.
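As a rough Python sketch of that architecture (the layer sizes and helper names are illustrative, not prescriptive):

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def make_layer(n_in, n_out):
        # One weight row per neuron, plus a trailing bias weight.
        return [[random.uniform(-1.0, 1.0) for _ in range(n_in + 1)]
                for _ in range(n_out)]

    def forward(layer, inputs):
        return [sigmoid(sum(w * x for w, x in zip(row, inputs + [1.0])))
                for row in layer]

    NUM_THRUSTERS = 8            # placeholder: one output neuron per thruster
    hidden = make_layer(6, 16)   # 6 command inputs, ~15-20 hidden units
    output = make_layer(16, NUM_THRUSTERS)

    def thrust_levels(command):
        # command: 6 values in -1..+1 (desired linear + angular acceleration)
        # returns: one 0..1 thrust fraction per thruster
        return forward(output, forward(hidden, command))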

Difficulties arise in the backpropagation implementation. After you get the feed-forward output from a randomly weighted net, you can immediately sum the torques and forces that would be generated by setting those thruster values, work out the resulting linear and angular acceleration, and then see how it differs from the desired acceleration. The problem is that this isn't simply reversible - you have to determine which thrusters helped and which hindered the result.
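The "forward" half of that check is straightforward; something like this (thruster data and names are hypothetical, and a diagonal inertia tensor is assumed for brevity):

    def cross(a, b):
        return [a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0]]

    def resulting_accels(levels, thrusters, mass, inertia):
        # levels: network outputs (0..1), one per thruster
        # thrusters: list of (r, d, f_max) = offset from the centre of mass,
        #            unit thrust direction, maximum force
        force, torque = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
        for level, (r, d, f_max) in zip(levels, thrusters):
            f = [level * f_max * di for di in d]
            force = [a + b for a, b in zip(force, f)]
            torque = [a + b for a, b in zip(torque, cross(r, f))]
        lin_acc = [fi / mass for fi in force]
        ang_acc = [ti / ii for ti, ii in zip(torque, inertia)]
        return lin_acc, ang_acc

The hard part is the step after this: distributing the resulting error back onto the individual thrusters.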

Let's say you have two groups of thrusters mounted at the ends of a thin cylinder, each group of four mounted in a cross (each thruster perpendicular to the centerline of the cylinder), and you want to spin the cylinder around its center (all input axes 0.0 except for one rotational axis). Suppose that at the beginning of training the net outputs that one thruster should fire at full power and everything else stays idle. No matter which thruster fired, you would get both angular and linear acceleration. You asked for 0.0 linear acceleration on all axes (so in that respect the thruster hindered the solution), but the thruster also helped rotate in the correct direction, so it becomes difficult to say whether it was helping achieve the desired result or not.

It's very important to note that you cannot effectively train a backpropagation network by computing a single value of error and using that value for every neuron's error amount.

If the network accidentally fired the "best" combination (the two thrusters that would cancel linear acceleration but combine angular acceleration), then the feedback would be simple to apply.


If you wanted to avoid this problem rather than figure out a feedback system that works, you would have to isolate or group thrusters in such a way that you could determine the linear and angular parts separately. A thruster that pushes directly toward or away from the center of mass is a linear-only thruster, and a pair of thrusters that always fire by the same amount, and that you have precomputed to always cancel each other's linear acceleration, is an example of an angular-only thruster group. A problem with angular-only groups is that if one of the thrusters is damaged, you have to disable its partner as well.
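As a concrete (made-up) example of an angular-only group, using the same (offset, direction, max force) layout as the sketch above:

    # Two equal thrusters on opposite sides of the centre of mass, firing in
    # opposite directions: the net force is zero, the torques add, so driving
    # both at the same level gives a pure rotation about the z axis.
    angular_pair = [
        ([ 1.0, 0.0, 0.0], [0.0,  1.0, 0.0], 10.0),
        ([-1.0, 0.0, 0.0], [0.0, -1.0, 0.0], 10.0),
    ]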


If you used a genetic algorithm to train the network instead of backpropagation, you would completely avoid having to figure out per-thruster feedback. However, the memory use and computational cost are probably far greater, since you're randomly adjusting weights in the network. This might prevent you from using it at run time to repair the network when thrusters are disabled.
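A bare-bones sketch of that genetic approach (helper names assumed; the network weights flattened into one genome per individual):

    import random

    def mutate(genome, rate=0.1, scale=0.5):
        # Perturb a random fraction of the flattened network weights.
        return [g + random.gauss(0.0, scale) if random.random() < rate else g
                for g in genome]

    def evolve(population, evaluate, generations=100):
        # evaluate(genome) -> fitness, higher is better (see the fitness
        # function idea earlier in the thread).
        for _ in range(generations):
            ranked = sorted(population, key=evaluate, reverse=True)
            elite = ranked[:len(ranked) // 2]
            population = elite + [mutate(random.choice(elite)) for _ in elite]
        return max(population, key=evaluate)

Every generation means re-simulating every candidate ship, which is where the extra cost comes from.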

