Neural Network - Discussion


103 replies to this topic

#61 Daerax   Members   -  Reputation: 1207


Posted 29 July 2008 - 05:28 PM

Quote:
Original post by Timkin
Oh, just a quick response to kirkd... if it weren't for Kolmogorov, who built on Markov's work, we wouldn't have our modern information age (including computers, electronic and photonic communications systems, the internet, etc)! So, it's probably a little unreasonable to suggest that Markov's work sat idle for 100 years until applications of Markov chains arose in speech recognition! ;)

Cheers,

Timkin


It would be nice to be able to read what kirkd said. I do not agree with that. Perhaps there would have been stalls, and things might have ended up with slightly different notations (as in, say, complexity theory), perhaps a different way of doing probabilities, but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles, including, off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von Neumann, John Backus...

Interestingly, what does owe a lot to AI is programming languages. Many features considered powerful today were already being used back in the day to make AI programs easier to tackle: functional programming, relational/logic programming and constraint solving, object-oriented programming, the module concept, and dynamic dispatch, to name a few.


#62 kirkd   Members   -  Reputation: 505


Posted 30 July 2008 - 12:36 PM

Daerax,

I apologize; however, I deleted my original response rather than leave what was considered to be misinformation in place. What I had originally said was that Markov developed the basis of Hidden Markov Models in the 1880s, but they didn't find much practical application until the 1990s, with speech recognition and bioinformatics. It was not my intent to suggest the work found no usage, but rather that a technology that found limited practical application for 100 years could be applied to a modern problem.

-Kirk



#63 Rockoon1   Members   -  Reputation: 104


Posted 31 July 2008 - 06:41 AM

Quote:
Original post by InnocuousFox
NNs can pretty much handle only a static time slice and don't handle looking ahead (or behind) very well. That makes any sort of planning algorithm a little muddy.


I am not sure I understand your reasoning for this statement.

NN's are nothing more than function approximators, and as such have no extra limits attached to them. The reason for NN's is machine-learning the functions in the first place, when you don't have the information necessary to simply map input(s) to output(s) using more traditional methodologies.

If you want to evolve a NN to look ahead or behind, give it some feedback nodes (outputs that are used specifically as inputs on the next query). This does put a constraint on training methodology as far as I am aware (you can't backprop-train a node without an error metric for it). A GA training approach is well suited to this kind of setup.
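As a rough illustration of the feedback-node idea (the class, weights, and sizes here are my own invention, not from any particular library): the last few outputs are simply carried over and appended to the next query's inputs.

```python
import math

def tanh_layer(x, w, b):
    # one dense layer: y_j = tanh(sum_i x_i * w[j][i] + b[j])
    return [math.tanh(sum(xi * wji for xi, wji in zip(x, wj)) + bj)
            for wj, bj in zip(w, b)]

class FeedbackNet:
    """Feedforward net whose last n_fb outputs are fed back as
    extra inputs on the next query (a hypothetical minimal sketch)."""
    def __init__(self, w_hidden, b_hidden, w_out, b_out, n_fb):
        self.w_hidden, self.b_hidden = w_hidden, b_hidden
        self.w_out, self.b_out = w_out, b_out
        self.n_fb = n_fb
        self.feedback = [0.0] * n_fb  # "memory" carried between queries

    def query(self, sensors):
        x = list(sensors) + self.feedback
        h = tanh_layer(x, self.w_hidden, self.b_hidden)
        y = tanh_layer(h, self.w_out, self.b_out)
        self.feedback = y[-self.n_fb:]  # stash feedback outputs for next step
        return y[:-self.n_fb]           # the rest are the "real" outputs
```

Because the feedback path has no training target, a GA can evolve all the weights at once, exactly as described above.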

Once a NN is suitable (or any function-approximation methodology), it can usually be converted easily to lookup tables plus interpolation, a simple and efficient function approximator that is also easily tweaked by hand.
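A minimal sketch of the lookup-table-plus-interpolation idea, assuming a 1-D function and uniform sampling (function names are illustrative):

```python
import math

def build_table(f, lo, hi, n):
    # sample f at n evenly spaced points over [lo, hi]
    step = (hi - lo) / (n - 1)
    return [f(lo + i * step) for i in range(n)], lo, step

def table_lookup(table, lo, step, x):
    # clamp to the table range, then linearly interpolate
    # between the two neighboring samples
    i = (x - lo) / step
    i0 = max(0, min(len(table) - 2, int(i)))
    t = min(max(i - i0, 0.0), 1.0)
    return table[i0] * (1.0 - t) + table[i0 + 1] * t
```

The table entries can then be tweaked by hand, which is exactly the property being claimed for this representation.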


#64 IADaveMark   Moderators   -  Reputation: 2535


Posted 31 July 2008 - 09:33 AM

First, in order to read non-current data (e.g. for detecting trends), you would have to assign an input to "what came before". In a sequential action this is fine, because you can look at whatever data structure you are using to keep track of "what came before" (e.g. a list) and say (current - n) or whatever. However, if you are looking at time-sensitive things, e.g. "things within the past 5 minutes", you have to get a little more clever. For example, in doing stock market analysis (a common example) you have to have inputs for "yesterday's closing price", "last week's closing price", "last month's closing price", or whatever time scales you find necessary. The more of those you throw in, the more inputs you need to account for. Each of those is subject to glitches and spiky local minima/maxima.
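The lagged-input scheme described above might be sketched like this (the lag choices and function name are illustrative only):

```python
def lagged_inputs(closes, lags=(1, 5, 21)):
    """Input vector of today's close plus lagged closes.
    The lags (1 trading day, ~1 week, ~1 month) are purely
    illustrative; closes[-1] is today's close."""
    return [closes[-1]] + [closes[-1 - lag] for lag in lags]
```

Every extra time scale adds another input, which is exactly the input-count growth being pointed out.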

More to the point, however, NNs are good for creating an output based on the current state of the inputs. You can't as easily start playing with things such as a Minimax-style problem or something as complex as a plan of actions in a sequence.

More later... I'm at a client site.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC

Professional consultant on game AI, mathematical modeling, simulation modeling
Co-advisor of the GDC AI Summit
Co-founder of the AI Game Programmers Guild
Author of the book, Behavioral Mathematics for Game AI

Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

#65 Timkin   Members   -  Reputation: 864


Posted 31 July 2008 - 01:43 PM

Some responses to recent comments...

Original post by Daerax
Quote:

I do not agree with that. Perhaps there would have been stalls and things might have ended up with slightly different notations like in say complexity theory, perhaps a different way of doing probabilities but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles including off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von neuman, John backus...


Most of what arose in western engineering (particularly telecommunications and control) and subsequently computing from the 40s onward was based directly on the understanding of stochastic processes developed by the Russian-Germanic alliance of the late 19th and early 20th century. Generally speaking, western scientists and mathematicians were simply nowhere near the level needed to create this understanding. There are countless examples of advances in western engineering and computing being directly based on Russian publications, or of Western scientists having spent time with their foreign counterparts and bringing the knowledge back with them.

During the latter half of the 19th century and into the 20th, there is a single strong thread of Russian mathematicians, predominantly coming from the same school at Moscow State University. The mathematics group there was pivotal to the developments of the time. Everything that came later in this area can be shown to have grown from the knowledge developed by this one group. Kolmogorov was one of those who stood out from the crowd, hence my selection of him.

I could provide examples of the direct links and the basis of my opinion if anyone is particularly interested, but I'd end up waffling on for ages, hence the omission from this post! ;)

On the issue of handling time in ANNs...

Feed-forward networks are very poor at handling time, even when you provide inputs covering information at previous times (which is essentially an attempt to model the autocorrelation of the process). However, there ARE network architectures that handle time very well... they're just harder to train, because you now have the problem of ensuring that you're seeing all processes passing through a given point at a given time.

Recurrent networks can be designed to model the underlying time-space differential of the process. You can even ensure properties such as stable (non-divergent) learning. I've made some particular contributions in this area in the application of recurrent architectures to learning control problems (where you know nothing of the system you are trying to control, only the performance requirements). Having said that, I certainly wouldn't advise anyone to apply these architectures to control problems in games.

Cheers,

Timkin

#66 Rockoon1   Members   -  Reputation: 104


Posted 31 July 2008 - 09:59 PM

Quote:
Original post by InnocuousFox
First, in order to read non-current data (e.g. for detecting trends), you would have to assign an input to "what came before".


I think you are missing the point of machine learning.

You really shouldn't assign any sort of historic input. Instead, you can let the machine figure out what's important to "remember", when to "forget", and so forth. You can simply continue to give it current state information, and access to the feedback nodes. The more feedback nodes you give it, the more historic state it has.

The historic state, the feedback, has no assigned meaning to you. That's for the GA to optimize.
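A sketch of how a GA fitness function might score such a controller, carrying the unassigned feedback/memory values across time steps. The controller interface and the toy reward here are hypothetical, purely for illustration:

```python
def episode_fitness(controller, weights, states, n_memory):
    """Run one candidate weight vector through a whole episode,
    feeding its memory outputs back in as extra inputs; the GA
    never assigns the memory any meaning.
    Assumed interface: controller(weights, inputs) returns
    (action_value, new_memory)."""
    memory = [0.0] * n_memory
    score = 0.0
    for state in states:
        action, memory = controller(weights, list(state) + memory)
        score += action * state[0]  # toy reward, purely illustrative
    return score
```

The GA then simply ranks candidate weight vectors by this fitness; whatever the memory values come to encode is an emergent by-product of selection.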

Quote:
Original post by InnocuousFox
More to the point, however, NNs are good for creating an output based on the current state of the inputs.


True.

Quote:
Original post by InnocuousFox
You can't as easily start playing with things such as a Minimax-style problem or something as complex as a plan of actions in a sequence.


NN's are just numeric functions. They do what functions do. Of course a NN isn't suitable for searching a game tree. That's what search algorithms are for.


#67 IADaveMark   Moderators   -  Reputation: 2535


Posted 01 August 2008 - 02:31 AM

... which is why NN's are more "silver spoon" than "silver bullet". It's a big ol' mathematical function - not a problem-solving algorithm.

#68 Álvaro   Crossbones+   -  Reputation: 13936


Posted 01 August 2008 - 04:35 AM

When I hear "AI" I immediately think "expected utility maximization", not "artificial neural networks".

Not all of AI is expected utility maximization, but a huge chunk of it is. It takes on many different shapes (e.g., game tree search) and it takes a lot of effort to get this general idea to solve real problems, so in that sense it's not a silver bullet either. But it is probably the first thing you should think of when approaching any decision-making problem.
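The idea can be sketched in a few lines; the action set, outcome distributions, and utility function below are all made up for illustration:

```python
def best_action(actions, outcomes, utility):
    """Choose the action with maximal expected utility.
    outcomes(a) yields (probability, result) pairs; all names
    here are illustrative."""
    def expected_utility(a):
        return sum(p * utility(r) for p, r in outcomes(a))
    return max(actions, key=expected_utility)
```

Most of the real work hides inside `outcomes` and `utility`; getting those right for a concrete problem is where the effort goes, which is why this is no silver bullet either.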

Unfortunately, ANNs have a much catchier name and most people think of them first.



#69 Rockoon1   Members   -  Reputation: 104


Posted 01 August 2008 - 08:53 AM

Quote:
Original post by InnocuousFox
... which is why NN's are more "silver spoon" than "silver bullet". It's a big ol' mathematical function - not a problem-solving algorithm.


Problem-solving is another name for the perhaps overly generalized term "search algorithms" (be it AB, MTD(f), A*, etc.).

Chess is a fine example where, while dominated by the traditional searches, machine learning has also played an important role.

What is the value of a knight or bishop sitting on D6?

AB-driven engines have an eval() that needs to know the values as they relate to their own run-time search capabilities. Many engines use piece-square tables, a set of many hundreds of coefficients which are tweaked not by humans but by machine-learning algorithms. Not only are these coefficients too big a problem for a human to define manually, due to the sheer number of them; the human is also not well suited to the task, because he or she is not truly capable of grokking the intricacies of how these values relate to an N-ply AB search, nor the horizon effects associated with it.

ML algorithms are very powerful tools which are typically not very useful as part of the end product, but can be very useful on your way to creating that end product.
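A toy sketch of a piece-square-table evaluation along these lines; the material values, square indexing, and mirroring convention are illustrative, not taken from any particular engine:

```python
# Material values plus a learned per-square bonus for each piece type.
PIECE_VALUE = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900, 'K': 0}

def evaluate(board, tables):
    """board: {square_index: (piece, color)}, color +1 white / -1 black,
    squares indexed 0..63 from a1.
    tables: {piece: list of 64 machine-tuned coefficients}."""
    score = 0
    for sq, (piece, color) in board.items():
        idx = sq if color == 1 else sq ^ 56  # mirror ranks for black
        score += color * (PIECE_VALUE[piece] + tables[piece][idx])
    return score
```

In a real engine the hundreds of table entries would be the coefficients that the ML pass tunes against the engine's own search behaviour.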


#70 EmptyVoid   Banned   -  Reputation: 99


Posted 01 August 2008 - 09:03 AM

Can't be done with current technology but good luck trying. =/

#71 Daerax   Members   -  Reputation: 1207


Posted 01 August 2008 - 09:13 AM

Quote:
Original post by Timkin
Some responses to recent comments...

Original post by Daerax
Quote:

I do not agree with that. Perhaps there would have been stalls and things might have ended up with slightly different notations like in say complexity theory, perhaps a different way of doing probabilities but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles including off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von neuman, John backus...


Most of what arose in western engineering (particularly telecommunications and control) and subsequently computing from the 40s onward was based directly on the understanding of stochastic processes developed by the Russian-Germanic alliance of the late 19th and early 20th century. Generally speaking, western scientists and mathematicians were simply nowhere near the level needed to create this understanding. There are countless examples of advances in western engineering and computing being directly based on Russian publications, or of Western scientists having spent time with their foreign counterparts and bringing the knowledge back with them.

During the latter half of the 19th century and into the 20th, there is a single strong thread of Russian mathematicians, predominantly coming from the same school at Moscow State University. The mathematics group there was pivotal to the developments of the time. Everything that came later in this area can be shown to have grown from the knowledge developed by this one group. Kolmogorov was one of those who stood out from the crowd, hence my selection of him.

I could provide examples of the direct links and the basis of my opinion if anyone is particularly interested, but I'd end up waffling on for ages, hence the omission from this post! ;)

Cheers,

Timkin


I do not want to pollute this thread, though it seems to have run its course. I do quite enjoy the history of mathematics.

I am not disagreeing that Kolmogorov played an important role, mainly that his role was pivotal. Things would have happened anyway, since there was so much going on. In the area of control and telecommunications there is no doubt his role was greater, but it carried less directly into computing. With the exception of his contributions to intuitionistic logic (a branch of constructivism of which, people are just now noticing, computer science is basically a branch), I do not see much direct influence. Current computer science is a child of the early 20th century obsession with formalism and rigour, propelled by Weierstrass as an overreaction to the poor proofs of the time. Type theory, so important now (including for verification of hardware and software, and for automated provers), is older than the Turing machine.

In terms of giants, Turing (British), Shannon (American) and von Neumann (Hungarian) have most influenced modern computer hardware and design. In software the legacy starts with Peano (Italy) and Frege (Germany). The names and location attributions are off the top of my head, so I cannot guarantee spellings and exact locations.

I would be interested in the sources which state that much of modern computer technology is based in any one place.

#72 Timkin   Members   -  Reputation: 864


Posted 04 August 2008 - 04:27 PM

Quote:
Original post by Daerax
I am not disagreeing that Kolmogorov played an important role, mainly that his role was pivotal.

Perhaps I over-stated my original point. I was not trying to suggest Kolmogorov was the only person responsible... but that he was a key link in a chain that, had it not developed and been applied, would not have brought us to where we are today. I was trying to make the original point that Markov's work did not sit idle for 100 years.

A particular example of the importance of Kolmogorov in the chain...

Shannon (who we would probably all agree has played a key role in the development of modern information theory and, subsequently, the modern computer and telecommunications industry) directly indicates that his mathematical theory of communication (published in his pivotal paper of the same name) was directly inspired by the work of Norbert Wiener (check the acknowledgements of that 1948 paper). Wiener, of course, is famous for many things, but many will know his name from the Wiener filter and the Wiener process (also known as Brownian motion). While a Wiener process is an example of a Levy process, it is actually a simple case of the more general differential Chapman-Kolmogorov equation, which describes the general class of drift-diffusion stochastic processes. The Wiener filter is the generalised filter for Wiener processes; many would know the specialised solution for linear, discrete-time Wiener processes, being the Kalman filter.
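For reference, the two results being linked here can be written out explicitly; this is the standard textbook notation, not taken from Timkin's post:

```latex
% Chapman-Kolmogorov identity for a Markov process (t_1 < t_2 < t_3):
p(x_3, t_3 \mid x_1, t_1)
  = \int p(x_3, t_3 \mid x_2, t_2)\, p(x_2, t_2 \mid x_1, t_1)\, dx_2

% The Wiener process is the drift-free Gaussian special case,
% with transition density
p(x, t \mid x_0, t_0)
  = \frac{1}{\sqrt{2\pi\sigma^2 (t - t_0)}}
    \exp\!\left( -\frac{(x - x_0)^2}{2\sigma^2 (t - t_0)} \right)
```

One can verify directly that the Gaussian density satisfies the identity, since the convolution of two Gaussians with variances $\sigma^2(t_2 - t_1)$ and $\sigma^2(t_3 - t_2)$ is a Gaussian with variance $\sigma^2(t_3 - t_1)$.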

While it is acknowledged that both Chapman and Kolmogorov developed their solutions independently, both build on Kolmogorov's work on reversible Markov processes, which are recognised as pivotal to the whole field of Markov processes, particularly in our modern usage. Without Kolmogorov's extensions to Markov's original work, we would not be where we are today.

I also do not agree that 'someone else' would have developed this understanding simply because there were others working in the field. It is definitely the case that more than one person is responsible for where we are today... but as with many fields of endeavour, they can be traced back to a handful of key discoveries/breakthroughs that were necessary before we could move forward.

Cheers,

Timkin

#73 brent_w   Members   -  Reputation: 100


Posted 05 August 2008 - 08:15 AM

Quote:
1) single-player "campaign" type games are very scripted environments. A very small subset of the rules design wants for the game I'm working on now:
- when the player destroys this Item, the AI will counter-attack.
- When the AI squad gets to 20% of its original units -> retreat.
- AI should only throw grenades when no other AI is throwing a grenade and has not for at least 10 seconds.
- Only 1-3 AI should ever rush the player at any given time
- AI should only spend 30% of their time hidden behind cover; further no single AI should remain hidden in cover for more than 3 seconds
- When the player's units get to a Point of Interest they should stop and play a canned set of animations at a specific point to develop the plot.

Personally I see this as one of the great failures of the state of game design today.

Players are tired of scripting, it holds up a sign which reads "BORING AND PREDICTABLE" and they can spot it a mile away.


#74 sion5   Members   -  Reputation: 100


Posted 05 August 2008 - 10:44 PM

Does anyone have a small game with AI done the "conventional" way who would be willing to let me write my own AI controller using NNs? Obviously you will take all credit for the game; all I'm looking to achieve is a set of results to compare and evaluate for my project.

#75 kirkd   Members   -  Reputation: 505


Posted 06 August 2008 - 03:36 AM

I don't know if it is helpful or not, but I have a PacMan clone on SourceForge that you're welcome to use. The AI is based on NEAT (Ken Stanley's NeuroEvolution of Augmenting Topologies), so essentially it already exists as a GA-NN. 8^( You're welcome to use it as a platform if you like, though.

http://sourceforge.net/projects/aipac/

-Kirk

PS - If you use it, please keep me updated on the results. You can send me a PM here or we can swap e-mail.




#76 sion5   Members   -  Reputation: 100


Posted 06 August 2008 - 04:59 AM

For some reason it's not running for me. It halts about 30 seconds after launching. The app looks well made; I would really like to see it working :-)

#77 ibebrett   Members   -  Reputation: 205


Posted 06 August 2008 - 05:21 AM

Quote:
Original post by sion5
Does anyone have a small game that has AI based on the "conventional" way that would be willing to let me write my own AI controller using NN's? Obviously you will take all credit for the game, all im looking to achieve is a set of results to compare and evaluate for my project.


Robocode. It was practically made for this. Sorry, I haven't read the rest of the thread so I don't know if someone has mentioned it.

#78 kirkd   Members   -  Reputation: 505


Posted 06 August 2008 - 05:43 AM

Read the README file, below. Most problems are solved there.

The stall you see is due to the GA running in the background. Essentially the visualization only shows the best PacMan for the current generation, and then the GA runs. For each member of the population, it has to run a simulation of the game which takes some time even in the absence of graphical updates and with some extra optimizations.

-Kirk

Quote:

AIPac
Robert Kirk DeLisle
12 Feb 2008

--------------
Plain English:
--------------

The Big Picture
---------------

A neural net controls PacMan's movements through the maze and you can control the types of inputs going into the PacMan. The default version associated with the executable has no ghosts and uses a Window around PacMan with a radius of 4 tiles. This results in four tiles in each direction around him, plus the two tiles in each direction that he occupies, or 4+4+2 (4 to the right, 4 to the left, 2 for PacMan) left/right and up/down. This gives 100 total tiles. I also have the tiles that PacMan covers removed from this set, so you get 96 inputs.

Each tile can have 3 possible inputs - 1 for ghosts (-1 if a baddie, 0 if no ghost there, and 1 if a blue one), 1 for a dot/wall (-1 if there's a wall, 0 if nothing there, and 1 if a dot is there), and 1 for a pellet (0 for no pellet, 1 for a pellet). The default version you have has no ghosts (you can control the number of ghosts in the input file), so you have dots/walls and pellets as inputs for each tile. The grand total is now 96 * 2 or 192 inputs.

At each step that PacMan takes, the neural network is evaluated. The net has 4 outputs, one for each direction (up, down, left, right). The output with the highest value gives the direction that PacMan takes.
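The input/output sizes and the movement rule described above can be sketched as follows (the variable names are mine, not AIPac's):

```python
# Radius-4 window, no ghosts, as in the default configuration above.
RADIUS = 4
SIDE = 2 * RADIUS + 2          # 4 left + 4 right + 2 PacMan tiles = 10
N_TILES = SIDE * SIDE          # 100 tiles in the window
N_INPUTS = (N_TILES - 4) * 2   # PacMan's own 2x2 removed, 2 inputs/tile

def pick_direction(outputs):
    # the largest of the four net outputs decides PacMan's next move
    directions = ('up', 'down', 'left', 'right')
    return directions[max(range(4), key=lambda i: outputs[i])]
```

With these numbers, N_INPUTS comes out to the 192 inputs quoted in the README.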

In the absence of ghosts, the game map is deterministic so yes, PacMan will take the same path when using the same neural network. The ghosts have a strong random component to their movements, so adding these will cause some random variation in PacMan's movement from run to run.

The Neural Nets are evolved using an Evolutionary Computation process. The fitness I'm using is the score that PacMan gets for one run through the maze. The time one run can take is limited (you can also control this in the parameters file). The goal is to maximize the score that PacMan gets.

I've found it most interesting to look at PacMan's navigation early in the process and compare that to what happens after it has run for a while. No surprise that early in the process PacMan's movements are more or less random - often he just drifts one direction or the other, hits a wall, and stops. Then after it has run for a while and his score is over 1000, his behavior is much more interesting. So far the maximum score I've seen (with no ghosts) is about 2600 out of a maximum of 2840.


How to get PacMan to run through the maze
-----------------------------------------

Yes, you can opt to have the PacMan run after each generation using the best neural net thus far. To get this to work, while one of the application's windows is activated (the PacMan game maze is probably the best), press the 'B' key. After the current evolutionary step, PacMan will become active. You can turn it off - go back to having PacMan not run - by pressing the 'B' key again.

BE PATIENT after you press the 'B' key as the evolutionary step will have to complete before PacMan runs and sometimes this will take a while depending on how many inputs you have and how large the population is. Also, if you get impatient and hit the 'B' key a second time, you will end up toggling the display off so you'll be back where you started. Just hit the 'B' key once.



----------------
Original README:
----------------
AIPac was developed in order to examine (play with) evolutionary training of a PacMan controller. I've tried to develop the simulator in as open a form as possible in order to allow modifications to the maze, the graphics, etc. I make no warranty on the quality of the code. I refer to AIPac and PMAI somewhat interchangeably, so be forewarned.

The files/distributions are as follows:

AIPac_README.txt - this file


AIPac_win32exe.zip:
This contains just an executable with the minimum files necessary to get it to work. The files are:
PMAI.exe - the executable
AIPac.param - a parameters file within which you can control the behavior of the executable. I've tried to make it self-explanatory.
Resources/ - this directory contains the graphics and mazes I used for the simulator. It should be fairly flexible to allow changes, but it may not be intuitive.


AIPac_code_win32.zip
This contains all the code used to develop the application along with the Resources listed above. I used Code::Blocks as my IDE, so I retained the project as-is. My version of Code::Blocks is 4639 which is current as of late 2007. I also used GCC 3.4 to compile the code. I'm certain you can use other IDEs (such as MSVC++), but I've opted for open source.
NEAT/ - This directory contains the code for evolutionary control of neural networks. NEAT is NeuroEvolution of Augmenting Topologies, developed by Dr. Kenneth Stanley. (http://www.cs.ucf.edu/~kstanley/) This version of NEAT was developed by Mat Buckland of AI-Junkie fame (www.ai-junkie.com), and he has given me permission to include it with my distribution. (Thanks, fup!) I have used his code with minimal modifications, and as a result it is a bit of a hack, especially the CController.cpp file.
Resources/ - same as above. The graphics and maze maps necessary for the executable.
Utilities/ - my collection of programming utility classes used in this project. I tend to use these a lot in my projects.


PM_NoGhosts_9600Gen_2270.wmv
This file shows the best results I've gotten thus far (late March 2007) with no ghosts in the maze, using a windowing method for inputs. In this input method, the neural network inputs consist of a window centered on PacMan and extending 6 tiles in each direction. This gives a 13x13 window of the game board as input. There are 4 outputs, one for each direction. The largest output determines the direction of PacMan's movement.


-------------------
Additional Details:
-------------------

Here are some additional details related to AIPac that might come in useful. I've tried to put the details in the order of interest and those pieces that will get you up and running the fastest.

Remember that the Resource directory needs to be present. This has all the graphics, mazes, etc. needed for the simulator. You can control most (all, I think) of the details of where the various files are found within the parameters files, AIPac.param, but I recommend leaving things as they are until you're familiar with the program.

While the simulator is running, if you press the 'B' key, this will toggle displaying the current best PacMan controller after each generation. Remember that there will be a delay in displaying the best controller due to the fact that generations take a while to process.

Currently, I have the parameters set up for a rather lengthy evolution step. If you want to see some action more quickly, try changing the following parameters in AIPac.param:

Replicates - Typically I have this set to 5 which means that each PacMan controller will be tested 5 times and a measure of performance assessed across the multiples. Set this to a smaller number and things will run faster. The purpose of this is to account for the random nature of the ghosts. If the ghosts are not present, set this to 1 since that type of simulation is deterministic.

PopSize - Obviously, reducing population size will make things faster, although potentially less interesting.

AIType - I have this set to Windowed which defines a window around PacMan, as described in the readme file. Setting this to Global will be a very different type of AI. Specifically, the inputs defined in the section following AIType setting will be used rather than a window around PacMan. This will shrink the number of inputs significantly, but I've found it to be less interesting.

I've tried to be as descriptive as possible in the AIPac.param file. It should be largely self-explanatory.


If there are other questions or comments, please leave a message in the forum.


---------------
Update Details:
---------------


v0.1.5 updates
--------------

Vector-based inputs implemented. Walls are input as a window. Pellet inputs provide a distance and angle to each power pellet, ghost inputs provide distance, angle, and state (-1 = hunting, 1 = blue) to each ghost, dot inputs show distance and angle to the centroid of all remaining dots. All vectors assume PacMan as the origin.
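A sketch of computing one such vector input, with PacMan as the origin (my naming, not AIPac's actual code):

```python
import math

def vector_input(pac_xy, target_xy):
    """Distance and angle to a target (pellet, ghost, or dot
    centroid) with PacMan as the origin, as in the v0.1.5
    vector-based input scheme; the exact encoding in AIPac
    may differ."""
    dx = target_xy[0] - pac_xy[0]
    dy = target_xy[1] - pac_xy[1]
    return math.hypot(dx, dy), math.atan2(dy, dx)
```

For ghosts, the README adds a third state input (-1 hunting, 1 blue) alongside the distance/angle pair.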

EnforceIntersections was added to the control parameters. If set to 1, PM will only make a directional change at an intersection. If set to 0, PM can change direction at will. There's a bit of a speed up with this as well as enforcing smooth paths.

**Verify proper scoring for dots - the possible maximum score should be 284 dots * 10. (Ghosts and Pellets seem to be OK.)**
I found that at certain intersections multiple dots are removed simultaneously, due to a perceived overlap of the bounding box and dot tiles, but only one produced a score. A modification was made to count how many dots are removed for any particular PacMan location and adjust the score appropriately. (MapController::ModifyTileByPixel)

The visualization delay was linked to the parameters file. Larger numbers cause the visualization to run more slowly so that it is easier to watch. This does not affect the actual GA portion, just the visual of PacMan running through the maze. The same delay is linked to the playable mode option.

A user playable option is now available. The parameter Mode can be set to Evolve to run an evolutionary simulation, or to Play for a Keyboard playable version. Use the arrow keys to move PacMan around the maze. All the other options remain the same.

**Fix the gate on the ghost cage - it doesn't redisplay and it may not be reset.**
A forced redraw of the entire maze was implemented any time the gate is opened or closed. This does not appear to slow the process whatsoever; in fact I can force a redraw at every frame without any noticeable performance loss. This is a much easier fix than implementing specific drawing and erasing of the gate, or modifying the maze graphics.



v0.1.4 Updates
--------------

A few bugs were fixed. Specifically, some issues with whether PowerPellets and Ghosts were visible or not were addressed.

The ability to ignore walls or dots was added to the Windowed AI. In the parameters file the following keywords control this as follows:

DotWallSingleInput - If set to 1, the input will be -1 for a wall, 0 if empty, and 1 for a dot. If set to 0, there will be two sets of inputs - one for dots one for walls.

DotsWallsOnly - If set to 0, both dots and walls will be used for inputs. If set to -1, only the walls will be seen. If set to 1, only the dots will be seen.
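One way the two encodings described above could look in code (a sketch of the README's description, not AIPac's implementation):

```python
def tile_inputs(tile, dot_wall_single_input=True):
    """Encode one window tile per the options above: with
    DotWallSingleInput=1, a single value (-1 wall, 0 empty, +1 dot);
    otherwise separate wall and dot inputs. Names are illustrative."""
    if dot_wall_single_input:
        return [{'wall': -1, 'empty': 0, 'dot': 1}[tile]]
    return [1 if tile == 'wall' else 0, 1 if tile == 'dot' else 0]
```

The single-input mode halves the dot/wall input count, at the cost of forcing walls and dots onto one shared axis.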



v0.1.3 UPDATES:
---------------

Windows now update during the evolution step, preventing the application from freezing during an epoch.

The console window now tells you whether PacMan will run or not after the current epoch and has (almost) instant feedback from pressing 'B'.

Small optimizations to speed up the simulation.

Ghosts are now allowed to reevaluate their direction after PacMan eats a PowerPellet. Hopefully this will discourage chance collisions, forcing PacMan to be more active in pursuit.

When the console window was running, a spontaneous crash would occur that seemed to stem from the Windows message queue. A DoEvents() function was implemented to allow window events to be processed while the console was active. This seems to have solved the problem.




#79 sion5   Members   -  Reputation: 100


Posted 06 August 2008 - 07:20 PM

ibebrett - That Robocode could prove very useful. The only problem is, it's Java. I'm sure Java isn't difficult to learn, but as this is a final-year project I would rather spend the time more wisely on research, as opposed to learning a new language.

#80 Barking_Mad   Banned   -  Reputation: 148


Posted 06 August 2008 - 10:40 PM

Heh, I know nothing of ANNs, but I have to laugh at the irony of the apparent cognitive bias of the OP towards the use of ANNs; given the subject, you would expect at least an open mind and a bias-free starting point.

Reading this thread was enjoyable, and that realisation made me chuckle. No disrespect intended to the OP.



