Realistic max for interactive Cellular Automata

Started by Numsgil
5 comments, last by Numsgil 17 years, 5 months ago
I'm trying to create a large ecosystem simulator (linky). Animals will probably be free-roaming agents, while the ground, water, atmosphere, and plants are represented by different cellular automata grids. Basically the heightmap is one CA, dirt quality another, air currents another, etc.

Since it's at heart an ecosystem, things need to be as large as possible so population sizes can be large enough for stable population dynamics. Assuming that I pick a resolution of 1 square meter for each CA cell, and assuming that any global interactions (erosion, etc.) are done with local information only (like a Gaussian blur), what is a realistic max for the dimensions of my world?

Most global interactions don't need to be performed more than once every few seconds, so my gut instinct is that the CA isn't going to be CPU limited as much as RAM limited. I'm thinking that most CA layers are maybe 1 or 2 bytes per cell. I'll probably have maybe 20 or 30 layers, so that's something like 40 or 50 bytes for each grid cell. Having to page from virtual memory might be the kiss of death for something like this, but I'm not sure. I'm thinking on a reasonable computer (512 MB RAM, 2 GHz processor) you could probably max out at a 3000 x 3000 CA with the properties I outlined above.

Just looking for people's thoughts and/or experience working with cellular automata. What's the realistic ceiling for something like this?
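As a rough sanity check on those numbers, here's a minimal back-of-the-envelope sketch -- the layer count and bytes per cell are just the ballpark figures above, not a final design:

```cpp
#include <cstdio>

// Back-of-the-envelope RAM estimate for the layered CA described above.
// Layer count and bytes-per-layer-cell are the rough figures from the post.
int main()
{
    const long long width  = 3000;
    const long long height = 3000;
    const long long layers = 25;            // "20 or 30 layers"
    const long long bytesPerLayerCell = 2;  // "1 or 2 bytes per cell"

    const long long cells = width * height;
    const long long total = cells * layers * bytesPerLayerCell;
    std::printf("%lld cells, %lld bytes (~%lld MB)\n",
                cells, total, total / (1024 * 1024));
    // 3000 x 3000 x 25 x 2 = 450,000,000 bytes, roughly 429 MB --
    // right at the edge of a 512 MB machine once the OS takes its share.
    return 0;
}
```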
Darwinbots - Artificial life simulation
Hi Numsgil,
I'm afraid I can't be of much assistance on this problem, but as I read your post I immediately thought that this would be an ideal problem for computing on the graphics card itself (check gpgpu.org; I think I've seen a simple cellular automata simulation on that site that runs entirely on the GPU).

If you're going for a big population in order to achieve stable effects, this might be the way to go. If you could somehow page different regions of your landscape in and out of GPU memory, you would probably get the best results performance-wise, and you could deal with a huge population, too.

Just my quick thoughts on this. I don't have any practical experience with cellular automata simulation, though.
You may want to look at the theory behind "hashlife", an implementation of the Game of Life, wherein you can make an infinitely extensible field where self-similarity is used to keep both compute and memory usage down.


To hold down the memory size, you may want to split the objects up between the cellular data (which you want to minimize the data size for, to fit the largest array possible) and a sparse array of fewer 'significant' objects.

Your terrain data (soil factors, water content, minor flora, etc.) would be in the static grid, and animals, larger plants, etc. would be objects in the sparse array. Your higher objects will only be in a fraction of the cells and will probably each have a lot more data (which you don't need to reserve space for in each cell).

You will need a way to link the sparse-array 'significant' objects to their grid location for when you do proximity checks while simulating object-to-object interactions: either a linked list for each cell (a list of all objects in the cell), or a space partitioning system (super grid or BSP tree) to control subsets which can be searched to get a list of nearby objects.

In both cases the moveable objects will have XY coordinates as part of their data.


By using sparse arrays for the more complex data and minimizing the grid cell data, you will be able to have a much larger map at the cost of a bit more CPU processing (to identify adjacents).

I'm doing something like this on a simulation project -- I have heavyweight AI as a target, so the 'significant' objects each often have many thousands of times the data of one map cell.
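A minimal sketch of the split described above, assuming a dense grid of small cell structs plus a sparse array of heavyweight agents linked back to their cells through an intrusive per-cell list -- the struct names and fields are purely illustrative:

```cpp
#include <cstdint>
#include <vector>

struct Cell {                  // dense grid entry: keep this as small as possible
    std::uint8_t soil;
    std::uint8_t water;
    std::uint8_t flora;
    std::int32_t firstObject;  // index of first object in this cell, -1 if none
};

struct Agent {                 // sparse 'significant' object: can carry much more data
    float x, y;                // world position (also identifies its 1 m grid cell)
    std::int32_t nextInCell;   // intrusive linked list within the same cell
    // ... AI state, energy, etc. would go here
};

struct World {
    int width, height;
    std::vector<Cell>  cells;   // width * height entries, a few bytes each
    std::vector<Agent> agents;  // far fewer entries, each much bigger

    Cell& cellAt(int cx, int cy) { return cells[cy * width + cx]; }

    // Link an agent into the cell it currently occupies.
    void insertAgent(std::int32_t id) {
        Agent& a = agents[id];
        Cell& c = cellAt(static_cast<int>(a.x), static_cast<int>(a.y));
        a.nextInCell  = c.firstObject;
        c.firstObject = id;
    }
};

int main() {
    World w;
    w.width = 3000; w.height = 3000;
    w.cells.assign(w.width * w.height, Cell{0, 0, 0, -1});
    w.agents.push_back(Agent{10.5f, 20.5f, -1});
    w.insertAgent(0);  // agent 0 now reachable from its cell's list
    return 0;
}
```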



Memory is cheap. Your machine spec probably allows use of the discount-priced RAM.

You could probably up your target machine's memory to 2 GB for a reasonable price.



Bitmapping data is your friend. If you can use a bit or two instead of a byte for many of the factors (and group all the cell factors into one record/struct to maximize the potential compression -- and turn off structure padding so the compiler packs it tightly), you could save a lot of space (assuming you weren't already planning to do this...).
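A minimal illustration of that idea using C++ bit-fields -- the field names and bit widths are made up; the point is just that several low-range factors can share a couple of bytes instead of taking a byte each:

```cpp
#include <cstdint>
#include <cstdio>

// Four 2-bit factors plus one 8-bit factor fit in 2 bytes instead of 5.
struct PackedCell {
    std::uint16_t moisture  : 2;  // 0-3
    std::uint16_t fertility : 2;  // 0-3
    std::uint16_t rock      : 2;  // 0-3
    std::uint16_t snow      : 2;  // 0-3
    std::uint16_t height    : 8;  // 0-255
};

int main() {
    std::printf("sizeof(PackedCell) = %zu bytes\n", sizeof(PackedCell));
    return 0;
}
```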
Quote: Original post by Anonymous Poster
...By using sparse arrays for the more complex data and minimizing the grid cell data, you will be able to have a much larger map at the cost of a bit more CPU processing (to identify adjacents).

I'm doing something like this on a simulation project -- I have heavyweight AI as a target, so the 'significant' objects each often have many thousands of times the data of one map cell.


That's more or less what I was thinking. Nice to know the same conclusion has been reached by another person independently.

Quote:
Bitmapping data is your friend. If you can use a bit or two instead of a byte for many of the factors (and group all the cell factors into one record/struct to maximize the potential compression -- and turn off structure padding so the compiler packs it tightly), you could save a lot of space (assuming you weren't already planning to do this...).


I was thinking along these lines, but most of my data needs at least a full byte to represent an appropriate number of values to give the diversity I'd like. Either way, I'm aware of bitmapping, so I'll keep it in mind as I work on this.

Thanks for the thoughts everyone.
Darwinbots - Artificial life simulation

