What Would You Need for a Solar System Simulation?

Started by
18 comments, last by Purgatorio 12 years, 7 months ago

Well the more you simplify things down, the further away you go from a "Solar System Simulation" which is the topic of your thread. If you really want to do it for the experience, why don't you simply try it? Even if you fail you are likely to learn a good bit in the process and you should gain a better understanding of the iPad processing limitations.


I'm not looking to program anything personally. And a simulation doesn't necessitate an exact replica; at least that wasn't my ultimate goal. I didn't actually think the examples I gave would make for such an intensive program. I see explosions that affect objects in games all of the time; I was merely wondering if you could break the same elements and principles down into a simplistic idea of planets and stars, rather than guns, ammo, bombs, buildings, etc. Like a proof of concept for technology/programming.

The general consensus seems to be that this isn't practical, at least as it has been described. The concept was to present an empty region of space with raw resources/materials that could be used to create other entities. I'd probably just be typing at myself at this point if I gave further explanation. Perhaps I can revisit this at a later time.

Thanks.
Robert Ortiz - Writer & Game Developer
I have a solar system example on my blog, vasilydev.blogspot.com. It comes with sources; open the folder bin/debug. Anything you asked for could be added to it.

Well the more you simplify things down, the further away you go from a "Solar System Simulation" which is the topic of your thread. If you really want to do it for the experience, why don't you simply try it? Even if you fail you are likely to learn a good bit in the process and you should gain a better understanding of the iPad processing limitations.


I feel like you could optimize the problem a bit using some sort of flow graph, similar to fluid dynamics, without losing much accuracy. Then it would just be an O(n*m) problem, where n is the number of particles and m is the resolution of your graph. I actually think you could get it pretty precise.

This is a fun problem. I'm going to try to think of some other groovy stuff you might be able to do. It seems like there should be some way to get the n-body problem down to near constant time if we consider it more like n bodies interacting with one space-time object and vice versa. It seems like you could boil it down to the sum of the equations for every body's influence, then reapply that to each body. I still have to think about whether it's even possible to implement that as trivially as it sounds (edit: I feel like the most trivial way would be what I said earlier, but I'm bored and I want to think about it more).
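The flow-graph idea is close to a particle-mesh approach: deposit mass onto a coarse grid, then sum each particle against the m grid cells instead of against the other n-1 particles. A rough sketch under those assumptions; `grid_gravity` and all its parameter defaults are made up for this example, not anyone's actual implementation:

```python
import numpy as np

def grid_gravity(positions, masses, grid_res=32, G=1.0, eps=0.05):
    """Approximate per-particle gravity by depositing mass onto a coarse
    grid and summing each particle against the m = grid_res^2 cells:
    O(n*m) instead of the O(n^2) all-pairs sum. eps softens the force so
    a particle inside its own cell doesn't blow up (pick it on the order
    of a cell width)."""
    lo = positions.min(axis=0)
    hi = positions.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)
    # Deposit each particle's mass into its containing cell.
    cells = np.minimum((positions - lo) / span * grid_res,
                       grid_res - 1).astype(int)
    mass_grid = np.zeros((grid_res, grid_res))
    np.add.at(mass_grid, (cells[:, 0], cells[:, 1]), masses)
    # Cell centers in world coordinates.
    axis = (np.arange(grid_res) + 0.5) / grid_res
    cx, cy = np.meshgrid(axis, axis, indexing="ij")
    centers = np.stack([cx, cy], axis=-1) * span + lo
    flat_centers = centers.reshape(-1, 2)
    flat_mass = mass_grid.reshape(-1)
    # Each particle interacts with every cell's aggregate mass.
    acc = np.zeros_like(positions)
    for i, p in enumerate(positions):
        d = flat_centers - p
        r2 = (d * d).sum(axis=1) + eps * eps
        acc[i] = (G * flat_mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
    return acc
```

Note the approximation error: a particle's own mass sits in the grid too, but the softening keeps that self-term small and finite.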

It sounds like you want every particle to interact with all the other particles, with everything updating in real time. That's an all-to-all comparison. You can optimize it into blocks, making it an all-to-many comparison. Either way, it is a polynomial-growth algorithm, like the polynomial curves in that wikipedia image. Note that all-to-all is O(n^2): just ten items result in roughly 100 comparisons, and 100,000 particles in a naive all-to-all simulation would require 10 billion (1E10) comparisons, repeated every single step.
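The naive all-to-all scheme being described can be sketched directly; this is the O(n^2) baseline the rest of the thread discusses optimizing away. A minimal sketch in Python (function names and the softening constant are illustrative):

```python
def naive_pair_count(n):
    # Every particle compared against every other: n*(n-1) ordered pairs.
    return n * (n - 1)

def naive_gravity(positions, masses, G=1.0, eps=1e-3):
    """Direct-summation O(n^2) gravitational acceleration.
    eps is a small softening length so coincident particles
    don't produce a division by zero."""
    n = len(positions)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi = positions[i]
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - xi
            dy = positions[j][1] - yi
            r2 = dx * dx + dy * dy + eps * eps
            inv_r3 = r2 ** -1.5
            acc[i][0] += G * masses[j] * dx * inv_r3
            acc[i][1] += G * masses[j] * dy * inv_r3
    return acc
```

For two unit masses one unit apart, each acceleration points toward the other body, as expected.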


It's doable in real-time, but the method to do so requires understanding of astrophysics and some fairly specialized programming skills. The algorithms, if chosen and implemented properly are O(n) and can be tweaked to have very low constant factors. For a toy-like simulation which doesn't need accuracy it's quite possible to hit memory bandwidth before CPU becomes an issue.

But it's doable, I used to toy around with problems like this, but a native application is too difficult to distribute (aka monetize) and various managed platforms aren't capable of doing it.

But it's simply not something a non-programmer can do.

It's doable in real-time, but the method to do so requires understanding of astrophysics and some fairly specialized programming skills. The algorithms, if chosen and implemented properly are O(n) and can be tweaked to have very low constant factors. For a toy-like simulation which doesn't need accuracy it's quite possible to hit memory bandwidth before CPU becomes an issue.


Care to share so I don't waste my afternoon on stuff people have done already? <3

It's doable in real-time, but the method to do so requires understanding of astrophysics and some fairly specialized programming skills. The algorithms, if chosen and implemented properly are O(n) and can be tweaked to have very low constant factors. For a toy-like simulation which doesn't need accuracy it's quite possible to hit memory bandwidth before CPU becomes an issue.


Out of curiosity, how do you turn the all-to-many comparisons into an O(n) problem? By definition, all-to-all or all-to-many problems are complexity O(n^x).

You can reduce the value of x to a low number through spatial tricks: you can make one pass to block out the space into zones, but even then you still must compare every node against every block, which stays polynomial with x > 1.

Out of curiosity, how do you turn the all-to-many comparisons into an O(n) problem? By definition, all-to-all or all-to-many problems are complexity O(n^x).

Barnes-Hut (or a variation of tree code), or the Fast Multipole Method (with whatever multipole representation you choose). They are both approximations by definition but have been shown to work well in practice, even for actual astrophysical simulation. There is also a rant from Barnes on how CS experts could never have come up with such a clever optimization...

BH is O(n log n). FMM is O(28n). Solving n log n = 28n shows that, for practical purposes (CS academia would cry foul), you can assume both are O(n), but for fewer than 2^28 elements (~268 million particles, ~4-8 GB of memory, vastly beyond reasonable non-clustered problems) BH has the lower constant factor. BH tree construction can be performed with what is essentially a radix sort, which is O(n) and benefits greatly from Morton ordering. FMM uses an implicit grid.
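For reference, 2D Morton (Z-order) keys are just bit-interleaved cell coordinates; radix-sorting particles by them groups spatial neighbors together, which is what makes the linear-time tree build practical. A small sketch (function names are illustrative):

```python
def part1by1(x):
    """Spread the 16 low bits of x so they occupy the even bit positions."""
    x &= 0xFFFF
    x = (x | (x << 8)) & 0x00FF00FF
    x = (x | (x << 4)) & 0x0F0F0F0F
    x = (x | (x << 2)) & 0x33333333
    x = (x | (x << 1)) & 0x55555555
    return x

def morton2d(ix, iy):
    """Interleave two 16-bit grid-cell coordinates into one Morton key.
    x bits land in even positions, y bits in odd positions, so sorting
    by the key walks the grid in Z-order."""
    return part1by1(ix) | (part1by1(iy) << 1)
```

For example, the four cells of a 2x2 grid map to keys 0..3 in Z-order: (0,0) → 0, (1,0) → 1, (0,1) → 2, (1,1) → 3.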

BH is branch and bound: it runs well on the CPU, is easy to parallelize, and benefits from uneven distributions. FMM prefers even distributions but is best suited for the GPU. BH variations using a kd-tree instead of a quad- or octree exist, but do not benefit much from the GPU.

Rendering is a matter of available hardware.


As said, it's doable. But either method will use all the computing power you throw at it, so iPad or iPhone are not ideal targets, though small n is likely doable.

For a trivial cosmological simulation, create n particles (of the same or slightly different mass), vary the initial velocities a bit, and let it run. Use the information provided by the methods above (meta nodes in BH or the coarse grid of FMM) to resolve collisions. Tweak the algorithm a bit to compensate for very large objects.

And you get a rudimentary simulation of solar system evolution. n = 1,000,000 is perfectly doable on a mid-range PC, especially if you take numeric ranges into account and implement everything using integers (initial particle mass = 1, sum(particles) < 2^64). Hence the "specialized programming knowledge": you need to optimize aggressively at the expense of accuracy (which works for toy problems). Simulating in 2D only reduces the constant factors further.

A possible tweak would be to run a multi-level simulation: "gas" particles are handled by one simulation (perhaps even a fluid solver) while "things that clumped" are handled by another, making collisions between tangible objects simpler.

For accurate simulations, the CPU does become a problem, since individual interactions need to be resolved within a certain error, but those run on clusters for days, so it's a different problem.

[quote name='frob' timestamp='1314035318' post='4852413']
Out of curiosity, how do you turn the all-to-many comparisons into an O(n) problem? By definition, all-to-all or all-to-many problems are complexity O(n^x).

Barnes Hut (or variation of tree code) or Fast Multipole Method (or whatever multipole representation you choose). They are both approximations by definition but have been shown to work well in practice, even for actual astrophysical simulation. There is also a rant from Barnes on how CS experts could never come up with such clever optimization...
[/quote]


Thanks, that's some interesting reading.

Follow-up papers from those show the error is about 0.1% per iteration for 10,000 particles with a near-uniform distribution, a little more for non-uniform distributions. That's horrible error for scientific purposes that need to run for an extended period, but reasonable for a toy app like this.

The constant part is quite large, and it jumps around in data reads, so I still doubt it would be feasible in real time on a small-cache processor like the iPad's at this scale of particles.


Follow-up papers from those show the error is about 0.1% per iteration for 10,000 particles with a near-uniform distribution, a little more for non-uniform distributions. That's horrible error for scientific purposes that need to run for an extended period, but reasonable for a toy app like this.

It depends. A 0.1% error may lead to the evolution of Universe B instead of Universe A. But how do you know which one is "correct"? Even with full computation you still have rounding errors.

For the purpose listed above, one important criterion is the stability of orbits, and that can be controlled in different ways. Other effects may matter less.

[quote]
The constant part is quite large and it jumps around in data reads, so I still doubt it would be feasible in real time on such a small cache processor as the iPad for this scale of particles.
[/quote]

The iPad is a bit on the low side, but as said, the algorithms scale really well.

But I still don't know of any engine that would be suitable for such a task out of the box. The physics would need to be not only tailored to the problem, but also squeezed small enough to fit onto mobile devices.

There are a lot of potential uses for touch/gesture interfaces, if only they had the power of PC gaming rigs. Then we could do some completely crazy stuff.

Things wouldn't necessarily need to occur in real time. There could be a brief cinematic to show the effects, and when it ends, you're left with the results of the event. Not as awesome, of course, but again -- this didn't need to be an exact, real-time replica. Just simulate likely outcomes and processes. And by simulate I mean produce -- nothing happens without the player's direct or indirect involvement.

Everything that's been said on this topic has been way over my head.
Robert Ortiz - Writer & Game Developer
