Embassy of Time

Gravity systems?


Posted (edited)

As some may know, I am working on a game that involves a lot of scientific simulation to do what I consider to be -real- procedural generation. I am currently going through the different sciences in my blog, but having closed the book on my last version, I am now facing the move into far more detail on the 'science' part of things. And that is forcing me to study up on a lot of stuff!

I know how complex gravity systems work, and I have the theory for how to design a universe that unfolds from a fake Big Bang and then spreads out, gathers into stars and galaxies, and eventually has planets in it and so forth. However, I have very little experience in turning that into actual code. While I could just sit down and do a simulation of every bit of gravity pull and such, I can't help but wonder if someone has found easier (and less CPU intensive!) ways of doing it?? Right now, the plan is to have random "gravity centers" in a point cloud gather the contents of the universe around them as they all move outward and affect each other, but this will likely mean checking the gravitational pull on every bit of content once every few milliseconds. It feels like there should be an easier way, maybe a set of (all things considered) simple equations that can pinpoint where everything is at a given point in time, or something like it. I honestly don't know, I have never gone into this much detail with my star / universe simulations before.

So, does anyone know some good material on simulating star clusters pulling on each other as they go through space? Anything is appreciated, no idea should be tossed aside!

 

Edit: By popular request, an elaboration: I am not trying to accurately simulate our universe. I am making a game in which I want to procedurally generate a universe according to realistic, but HIGHLY simplified, scientific methods.

Edited by Embassy of Time

Just look up how it's done in the real world and do the best you can with the hardware you have.

 

Odds are it will be pretty processor intensive, i.e. particle fluid simulation type stuff.


Posted (edited)

It feels like there should be an easier way, maybe a set of (all things considered) simple equations that can pinpoint where everything is at a given point in time, or something like it.

Being able to predict a trajectory in the future implies you know all relevant forces between now and that point in the future.

Then (at least in theory) you could compute the trajectory, and thus the position at time now+something.

I have no experience in what you're trying to do, but I think the above basically breaks down for anything beyond a fixed number of fixed forces (forces not varying in position or strength). Since it seems you're way beyond that, incremental step-wise computation seems the best approach to me.

To reduce the computation effort, you may want to prune irrelevant forces from the calculation (assuming you'll have heaps of small, distant rocks that contribute basically nothing).

One trick you may want to use to increase precision is to sum the smaller forces first. That way they get a chance to accumulate to a relevant number before being swamped by the bigger forces. There are likely lots of other numeric tricks that apply, but that's an area I'd like to explore too, one day :)
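A tiny Python sketch of that trick, applied per force component (the helper name is mine, and math.fsum is a ready-made alternative that is even more careful about rounding):

def sum_small_first(values):
    # Add the smallest-magnitude contributions first, so they are not lost
    # to rounding when added to much larger values.
    return sum(sorted(values, key=abs))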

Edited by Alberth
This is a huge area of study. You can easily make a PhD thesis about this, and many people have.

So you have N particles that interact through gravity. You would normally model them as spheres of uniform density and fixed radius, so when two of them get really close things don't blow up.

You then need to do two things per time step:
(1) Compute the net force on each particle.
(2) Integrate the equations of motion.

(1) is the expensive part. Naively, it takes time proportional to N^2. It turns out there is a very clever technique called the "fast multipole method" that is explained terribly in the papers I have found, but it can approximate the forces to very high precision in O(N) time. I have only ever needed a 1-D version of this and it was quite challenging. I don't know how much harder it would be in 3D.
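For reference, here is a minimal sketch of the naive O(N^2) version of steps (1) and (2) in Python. The Body fields (mass, pos, vel), the choice of G = 1 and the softening length EPS are all illustrative assumptions, not anything prescribed above:

import math

G = 1.0     # gravitational constant in simulation units (assumed)
EPS = 1e-3  # small softening length so near-zero distances don't blow up

def step(bodies, dt):
    # bodies: list of objects with mass (float) plus pos and vel (length-3 lists)
    # (1) net acceleration on each body from every other body: O(N^2)
    acc = [[0.0, 0.0, 0.0] for _ in bodies]
    for i, bi in enumerate(bodies):
        for j, bj in enumerate(bodies):
            if i == j:
                continue
            dx = [bj.pos[k] - bi.pos[k] for k in range(3)]
            d2 = dx[0]**2 + dx[1]**2 + dx[2]**2 + EPS**2
            f = G * bj.mass / (d2 * math.sqrt(d2))   # G * m_j / |r|^3
            for k in range(3):
                acc[i][k] += f * dx[k]
    # (2) integrate the equations of motion (semi-implicit Euler, for brevity)
    for b, a in zip(bodies, acc):
        for k in range(3):
            b.vel[k] += a[k] * dt
            b.pos[k] += b.vel[k] * dt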

Posted (edited)

You then need to do two things per time step:

Yep, that's the basic algorithm for particle-based fluid simulation. Applications include aerodynamics, hydrodynamics, gravity simulations, weather simulations, perhaps even nuclear explosion simulations (I'd have to look that last one up).

As I said, rather processor intensive: supercomputers and parallel processing.

Edited by Norman Barrows

 

It feels like there should be an easier way, maybe a set of (all things considered) simple equations that can pinpoint where everything is at a given point in time, or something like it.

Being able to predict a trajectory in the future implies you know all relevant forces between now and that point in the future.

Then (at least in theory) you could compute the trajectory, and thus the position at time now+something.

I have no experience in what you're trying to do, but I think the above basically breaks down for anything beyond a fixed number of fixed forces (forces not varying in position or strength). Since it seems you're way beyond that, incremental step-wise computation seems the best approach to me.

To reduce the computation effort, you may want to prune irrelevant forces from the calculation (assuming you'll have heaps of small, distant rocks that contribute basically nothing).

One trick you may want to use to increase precision is to sum the smaller forces first. That way they get a chance to accumulate to a relevant number before being swamped by the bigger forces. There are likely lots of other numeric tricks that apply, but that's an area I'd like to explore too, one day :)

 

That's basically the way I'm going now. I just get the weird sense that some clever person already condensed it down to lesser equations.

This is a huge area of study. You can easily make a PhD thesis about this, and many people have.

So you have N particles that interact through gravity. You would normally model them as spheres of uniform density and fixed radius, so when two of them get really close things don't blow up.

You then need to do two things per time step:
(1) Compute the net force on each particle.
(2) Integrate the equations of motion.

(1) is the expensive part. Naively, it takes time proportional to N^2. It turns out there is a very clever technique called the "fast multipole method" that is explained terribly in the papers I have found, but it can approximate the forces to very high precision in O(N) time. I have only ever needed a 1-D version of this and it was quite challenging. I don't know how much harder it would be in 3D.

Got links to some of those PhDs?? Even if the advanced stuff is too, well, advanced, it might trigger some inspiration. Your (1) and (2) steps are what I am looking at doing now, but I have already googled up the multipole thing and hope there is gold in them there hills!


Okay, since I work in Astrophysics (as one of those many PhDs doing this still ;-) ), I thought I should add my weight to this one.  Basically, there's no way in hell you can model a full Universe, even in crude detail, using actual physics.  I have colleagues who do these things for a living, and to get even half-crude simulations they need to run for weeks/months on parallel supercomputers, usually with 100s or 1000s of processors.

 

However, what you can (and probably should) do is use the various models and mathematical prescriptions that describe the structure in the Universe to create a pseudo-model.  For example, the large-scale structure of the Universe is a network of filaments connecting large galaxy clusters which obey some sort of structure function.  Then at smaller scales, galaxy clusters are often modeled using simple radial profiles such as a King Profile.  Galaxies themselves have either a spiral or an elliptical profile, depending on various parameters such as the amount of gas, the angular momentum of the gas, the age of the stars, whether the galaxy has merged with another galaxy earlier, and other factors.  And then of course the contents of galaxies can be parameterized statistically, such as globular clusters, or the distribution of stars (masses and binarity), and of course the distribution of planets around stars!

 

But getting a pseudo-physical algorithm that can 'simulate' the formation of a full Universe like this is a tough ask in itself, even if you only use approximations rather than real physics.  I've not quite thought about it myself, tbh, but I'm sure there are ways to do it that will give you the kind of results you are hoping for.  Sorry this is more of a 'what you can't do' post than a 'what you can do' one, but I thought I should point that out now, before you spend/waste too much time on it, and point you more in the direction of what is possible.  This is of course simply creating procedural models of the Universe rather than just planets, but it's the same principle ;-)

 

As a small aside, I'm sure you know about these but thought I'd link just in case since the techniques these guys use will probably overlap with what you might want to do :

http://universesandbox.com/

http://spaceengine.org


Posted (edited)

Okay, since I work in Astrophysics (as one of those many PhDs doing this still ;-) ), I thought I should add my weight to this one.

Oh no, no no no, don't you worry, I am NOT simulating a full universe!!!! I teach (taught) science and study a bunch of sciences myself, and did a thorough study before all of this, and no, no, noooo I am not going for anything more than a laughably simplified universe using the most basic of scientific equations! At the moment, I am trying to put the first tiny pieces into action, so to speak, and I just want to avoid wasting time on a lot of overly complicated math if there are one or more shortcuts that can be taken. I just got basic gravity (Newton's law of universal gravitation) working, for example. I know the games / simulations you list, and I doubt that the parts of mine that simulate the cosmos will be even that advanced! All I want is something that can claim to be based in science, not something that is scientifically accurate. I know (some of) my limitations :o

Of course, now that you have 'outed' yourself as an astrophysicist, this debate might get a whole lot more interesting ;-) Incidentally, I can't find anything on a King Profile, do you have a good link explaining that?

Edited by Embassy of Time

 

Okay, since I work in Astrophysics (as one of those many PhDs doing this still ;-) ), I thought I should add my weight to this one.

Oh no, no no no, don't you worry, I am NOT simulating a full universe!!!! I teach (taught) science and study a bunch of sciences myself, and did a thorough study before all of this, and no, no, noooo I am not going for anything more than a laughably simplified universe using the most basic of scientific equations!

 

Then you need to be much clearer about what you're trying to do, because I also thought you were trying to simulate a universe from creation to late-stage development.  My thoughts are exactly the same as sergamer1's: there's no chance whatsoever of doing this even close to anything usable for a game with simulation times as low as a few milliseconds.

I think whatever you're trying to do, if you want to use somewhat realistic physics (no matter how simple the math) on galactic or universal scales, you will have to sub-divide things to a huge extent.  Or, better would be to limit the scope of what you're trying to simulate.  


Then you need to be much clearer about what you're trying to do, because I also thought you were trying to simulate a universe from creation to late-stage development.  My thoughts are exactly the same as sergamer1's: there's no chance whatsoever of doing this even close to anything usable for a game with simulation times as low as a few milliseconds.
I think whatever you're trying to do, if you want to use somewhat realistic physics (no matter how simple the math) on galactic or universal scales, you will have to sub-divide things to a huge extent.  Or, better would be to limit the scope of what you're trying to simulate.

I added an edit to make sure people get the details of my intentions. I am currently working on sub-division tactics, and part of their purpose is to make sure that only relevant parts of the universe get simulated. I did that before, too. Trust me, I know how big and complex the universe is (metaphorically speaking), this is not a "universe copy-paste", it's just a project trying to be based on realistic science.


Oh no, no no no, don't you worry, I am NOT simulating a full universe!!!!  

Yes, I didn't feel like you were planning that from your original post, but with the way the thread was going, it seemed like things were pointing in that direction, which I thought I should point out right now is a very bad idea, unless you want to train to be an Astrophysicist yourself :-)

 

At the moment, I am trying to put the first tiny pieces into action, so to speak, and I just want to avoid wasting time on a lot of overly complicated math if there are one or more shortcuts that can be taken. I just got basic gravity (Newton's law of universal gravitation) working, for example. I know the games / simulations you list, and I doubt that the parts of mine that simulate the cosmos will be even that advanced! All I want is something that can claim to be based in science, not something that is scientifically accurate. I know (some of) my limitations

So, the Universe is a pretty big topic to discuss, so where to begin :-) .  Newton's law of gravity is certainly useful for very basic motions, such as how planets (or moons or rocket ships) orbit around stars (or other planets), but it will be too expensive if you try to model larger systems with more than a dozen (or perhaps a couple of hundred) objects.  What you should be looking at is developing (or reading about) simple models that use simple mathematical equations to represent the gravitational potential field (or simple combinations of such fields), and then your physical objects (e.g. galaxies, stars, etc.) move as test particles (https://en.wikipedia.org/wiki/Test_particle) in these potential fields rather than you computing the full, insanely expensive gravitational force; there's a small sketch of this idea just after the list below.  Let's take a basic galaxy as an example.  It contains:

  • A spherically symmetric halo of dark matter.  It doesn't do anything interesting really, but it holds the galaxy together gravitationally and can be approximated by a simple equation (e.g. an NFW profile, https://en.wikipedia.org/wiki/Navarro%E2%80%93Frenk%E2%80%93White_profile)
  • A halo of old stars, many contained in very old massive clusters called globular clusters.  The globular clusters can be very easily modelled with something called a Plummer potential profile (https://en.wikipedia.org/wiki/Plummer_model)
  • The gas in the galaxy (often called the interstellar medium, or ISM) lies in the disc of the galaxy but is often concentrated in spiral arms, as in the classic picture of galaxies (there's no simple article for this, sorry, but maybe this helps: https://en.wikipedia.org/wiki/Gravitationally_aligned_orbits)
  • The gas is compressed in these arms, leading to new stars forming from the gas.  The density of gas and the rate of formation of new stars are linked by several empirical relations, like the Kennicutt-Schmidt relation, which let you model approximately how this happens (sorry again, no simple article on this).  These stars then spread into the disc of the galaxy and move around in (approximately) circular orbits around a spiral potential.
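Here is that sketch: a single test particle (a star, say) orbiting in a Plummer potential like the one linked above. The constants, units and names are made up for illustration; the point is that the whole cluster costs one equation per step instead of a sum over every member star.

import math

G = 1.0          # gravitational constant in simulation units (assumed toy value)
M_CLUSTER = 1e5  # total cluster mass (assumed toy value)
A_PLUMMER = 1.0  # Plummer scale radius (assumed toy value)

def plummer_accel(pos):
    # Acceleration of a test particle in a Plummer potential centred at the origin:
    # a = -G * M * r_vec / (r^2 + a^2)^(3/2)
    r2 = pos[0]**2 + pos[1]**2 + pos[2]**2
    f = -G * M_CLUSTER / (r2 + A_PLUMMER**2) ** 1.5
    return [f * c for c in pos]

def advance_test_particle(pos, vel, dt):
    # Simple kick-then-drift update; the entire cluster is one function call per step.
    acc = plummer_accel(pos)
    vel = [v + a * dt for v, a in zip(vel, acc)]
    pos = [p + v * dt for p, v in zip(pos, vel)]
    return pos, vel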

And that's just to model a typical everyday spiral galaxy that you might find.  If you want to model how this forms (in a hand-wavy way) from the big bang, then you need to consider the large-scale picture, like the formation of galaxy clusters and how protogalaxies merge to form the next generation of galaxies.  That's not quite my own field, but I know enough to know it's possible to do in some nice approximate way like you are probably suggesting.  You could, for example, model a Universe with various smaller galaxy seeds (but not too many, unless you want the performance issues again).  There would need to be some underlying filament potential which describes the large-scale structure of the Universe (e.g. https://phys.org/news/2014-04-cosmologists-cosmic-filaments-voids.html) and then your test particles will move towards them to mimic the natural formation of the large-scale structure.

 

As you might guess, there is so much that goes into this (and even I'm not 100% sure of all the steps), but I hope this little post has helped explain a possible path to modelling a Universe on the cheap!  ;-)

 

Of course, now that you have 'outed' yourself as an astrophysicist, this debate might get a whole lot more interesting ;-)

Haha, guilty as charged!   Never thought I was hiding though!  :P

 

Incidentally, I can't find anything on a King Profile, do you have a good link explaining that?

Hmmm, unfortunately some of these 'basic' things don't have nice pages for the layman like a wiki.   There are a ton of astro articles (such as university pages) if you type 'King cluster profile' into Google, but I'm not sure which one is good to recommend.  But there's hopefully enough info and links here to help for now!  ;-)


Posted (edited)

<all of it>

Yeeeesssss...... The knowledge must floooooww......

*cough* That's some good stuff, everything is already open in browser tabs and I will be braining out on it ASAP!! I did do a fairly in-depth study into nearly every science before this (starting with quantum mechanics, then astrophysics, nuclear physics, chemistry, geology (incl. geophysics, geochemistry, geomorphology, etc.), climatology, glaciology, biochemistry & biology, evolution (incl. human evolution), neurology, psychology (linking those two is like dabbling in black magic...), sociology, economics, politics, architecture, engineering, all manner of history, and modern technology. It was a .... fun few months!), and I managed to very crudely simulate cosmic filaments in the older version:

[image: newniverse_big_by_embassyoftime]

The problem is that they are static, they do not develop from an older form. I am hoping to put together something that will take a fairly random dispersal of masses and have them gather into something coherent and useful. I did a 50 particle raw test run, with only Newton's universal gravitation to guide it. It got fairly interesting results (note that movement lines are enabled, because there is no animation to see what happens):

[image: abstract_universe_by_embassyoftime]

It may look like just a splatter, but if you know what to look for (I am guessing you might), you can see a few masses shot or flung completely out of the main system, but several others gathering in a few clusters, many near to each other. I am currently looking for a way to add more particles while reducing CPU burn (it does a complete N^2 gravity comparison). Not that this is my main question; the stuff you gave will hold me over for some time! But once I get a better way of doing this, my hope is to consider the simulated masses to be 'mass centers', representing multiple smaller masses, possibly in some lattice configuration. I still don't know, and any and all input is welcome, astrophysicist or not :)

Edited by Embassy of Time

Posted (edited)

I am currently looking for a way to add more particles while reducing CPU burn (it does a complete N^2 gravity comparison). Not that this is my main question; the stuff you gave will hold me over for some time! But once I get a better way of doing this, my hope is to consider the simulated masses to be 'mass centers', representing multiple smaller masses, possibly in some lattice configuration.

 

You're on to a good approach. I'll try to help you forward.

You don't need to compare every particle against every other particle.  If you want to know the g-force exerted by a distant cloud of particles, you can simply calculate the g-force exerted by the center of mass of that cloud of particles.  You don't need to calculate the individual g-forces for each particle.  However, if you are inside the cloud (rather than far away), then this method will fail.  This behavior is described by Newton's shell theorem.

So imagine that a simulated universe has "west", "middle", and "east" sides. If you have a particle on the "west" side, you don't need to compare it against every single particle on the "east" side. All you need to do is compare it against the center of mass of the east side.

Divide the universe into cells. For each cell, you compute the center of mass of all the particles inside it. Then, for every particle, you calculate the forces from 1) all the nearby particles and 2) all the farther away cells. How you define "near" and "far" will determine how accurate the result is. You can implement the cells with a spatial hash, but you might be better off with an octree.  The tree has a higher setup and memory cost but will result in fewer g-force calculations.

Here's pseudo-code for calculating gravity from an existing octree. 

func evalCell(Particle p, Cell cell) {     // Evaluate current cell
  if (isFarEnough(p, cell)) {              // Far enough away?
    applyGravity(p, cell);                 //  Compute cell gravity force
  } else {                                 // Not far enough?
    foreach (Cell child : cell.children) {
      evalCell(p, child);                  // Evaluate each sub-cell
    }
  }
}
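If it helps, a common concrete choice for isFarEnough is the Barnes-Hut opening criterion: treat a cell as a single point mass only when its width is small compared with its distance from the particle (essentially what the "fudge" value mentioned further down controls). A minimal Python sketch with made-up field names:

import math

THETA = 0.5  # opening angle; smaller = more accurate but more work (assumed value)

def is_far_enough(p, cell):
    # Treat the cell as one point mass when (cell width / distance) < THETA.
    dx = cell.center_of_mass[0] - p.pos[0]
    dy = cell.center_of_mass[1] - p.pos[1]
    dz = cell.center_of_mass[2] - p.pos[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return cell.width < THETA * dist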

This will only get you down to O(n log n). You can make a few optimizations or throw a few more cpu cores at the problem, but what you are doing is inherently expensive.  Generating interesting simulations is going to require massive numbers of particles.

So implement this algorithm, but also plan on implementing ways to fudge the results that you want. For example, you could have small particles merge into larger particles or pseudo-dark matter that provides a static anchor for particles to attract to (without adding much computational cost).

Edited by Kem7

I did do a fairly in-depth study into nearly every science before this (starting with quantum mechanics, then astrophysics, nuclear physics, chemistry, geology (incl. geophysics, geochemistry, geomorphology, etc.), climatology, glaciology, biochemistry & biology, evolution (incl. human evolution), neurology, psychology (linking those two is like dabbling in black magic...), sociology, economics, politics, architecture, engineering, all manner of history, and modern technology.
 

Yes, I did read through some of them.  I have to say most of those are completely irrelevant to the problem of the formation of structure in the Universe, hehe :-) .  But having broad knowledge is always a good thing before embarking on an epic project.  However, it's also not good to get lost amongst all of that, since it might be tricky figuring out where on Earth to start (believe me, I've done the same thing).

 

The problem is that they are static, they do not develop from an older form. I am hoping to put together something that will take a fairly random dispersal of masses and have them gather into something coherent and useful. I did a 50 particle raw test run, with only Newton's universal gravitation to guide it. It got fairly interesting results (note that movement lines are enabled, because there is no animation to see what happens):
 

Okay, first of all, the large-scale filamentary structures that develop in the Universe are kind of static in a way, although they are more the hubs of growth of galaxies and clusters of galaxies.  Take this famous simulation of the Universe, the Millennium Simulation (https://wwwmpa.mpa-garching.mpg.de/galform/virgo/millennium/#slices); if you look at the slice images at different redshifts/z-values (which is the same as different ages of the Universe), you can see the filaments start very thin but then grow and accumulate mass, yet they don't really move around that much.  What you want (if you really want to mimic the evolution of the Universe) is something that starts roughly uniform, and then filaments grow out of the background, gravitationally attracting more and more mass, creating large galaxies and clusters.  And the trick will be to do this without actually computing all that expensive gravity.

 

About your 50-particle run: that actually looks exactly like what these simulations should look like with basic gravity, so I guess you're on the right track ;-) .  You'll occasionally see stars get ejected, and also binary stars (you can see the two stars orbiting each other almost getting kicked out).  Here's a famous (in Astrophysics) example with just 3 stars, but it has similar features to yours (http://www.ucolick.org/~laugh/oxide/projects/burrau.html).

 

But back to the issue of computing for more bodies.  The reply above is correct in that you would need some kind of tree to efficiently compute gravity for larger numbers.  However, you'll only need this when the numbers are of order 1000 or more, and by that point it takes (on a single CPU) about 0.01s to compute all the forces for all bodies.  If you want a game running at 30/60 fps, you're already near the limit, so it's questionable whether using a tree is worth it.  Also, as I said in my previous post, this is the route to actually computing the real physics terms, which is NOT what you want to be doing!  You want to mimic these so it looks real rather than doing all the hard work.

 

I was trying to make this point yesterday (probably not well enough) with the globular clusters.  Globular clusters have millions of stars, and there's no way in hell you could model that by computing the real gravitational force.  But these clusters have a simple average density and potential field (given by the equations in the wiki article), and using that instead will give you the motion of a star around the cluster, so 1 equation instead of 100,000s of them with real gravity.  There are sophistications that can be made of course, such as finding whether there are any nearby interactions with stars to model scattering and ejections, but these are the icing on the cake really!  I'm a bit busy atm, but maybe I'll put together some pseudo-code for you to see what I mean, because it's tricky stuff.  But it'll have to be in a few days, sorry!

 

In the meantime, don't drown in all the knowledge!  :P


Posted (edited)

In the meantime, don't drown in all the knowledge!   :P

When I one day have to go, I want THAT to be the way I go out!!! And have it on my tombstone, "In knowledge, he drowned"! But on to more pressing matters....

I am going to view your two replies (Kem7 and sergamer1) as one, because both are good stuff and deserve to be combined, Voltron-style...

Regarding using octrees or similar methods, this is something I am poking around with a lot at the moment. I actually managed to get a fairly solid 1000-particle (yes, one thousand) simulation to run smoothly on my small laptop, with some trickery, but I am now looking into possibilities for bigger versions. It should be noted that although I am trying to run relatively fast simulations right now, the game will not run them this fast, at least not when getting into details. The large-scale astrophysics ('small-scale' being interactions inside a star system, or between close stars) is there to set the stage for the later game. Foreplay, in a manner of speaking (sorry).

The 1000 run basically divides particles into time slots, matching ten particles with every other particle each cycle. Note that when particles are matched, BOTH particles are influenced, so a complete cycle through all particles actually matches each set twice (once from either particle's POV). I am skeptical of the octree approach, because many particles can easily end up so close that they break the advantage, and the movement of particles also indicates that octree borders (between cells) will cause problems. Either two close particles will not respond realistically to their proximity (because one is counted into a larger cell average), or any cell has to treat its closest cells on a particle-by-particle basis anyway, which makes estimating the CPU load a nightmare. Also, my OCD takes over and screams "octrees are not real!!", convincing me that there will be a noticeable effect that shows where cells are, in the motion of particles.

What I have been pondering for a day or two is to not use an octree, but instead treat clusters of particles as their own 'cells'. One way is to somehow calculate where the density of particles is highest, and treat those places as cluster cells; anything inside acts on anything else inside on a particle-by-particle basis, while anything outside interacts with the cluster as a single 'cluster particle'. An alternative way is to have A and B particles, where perhaps 100-200 A particles use the full simulation, and B particles only react to the 10 or so nearest A particles, and in return strengthen the A particles when they get close to them. This approach, I am thinking, would simulate the idea that it is the closest others that have by far the greatest influence, while still letting the entire universe have some influence through the influences on the A particles. Yeah, that's still a lot of influences, but the math should, theoretically, be far less CPU demanding. Your views on the matter??

Regarding the cosmic filaments, I am actually a bit confused. I studied them a lot in the early stages (and walls and voids and so on), but I always felt that they were an "exploding liquid" kind of thing, distribution-wise! They look like a still shot of the water inside a water balloon during a powerful burst, and it would seem logical that the universe grew in a similar way, with denser areas drawing in star stuff more as they drifted apart. But what you (sergamer1) describe makes them seem almost static? The narrowing makes sense, they attract their surroundings, but I would have thought that nodes and filaments themselves moved, at least somewhat, as the universe expanded. No?

And yes, a lot of stars (technically, mass centers) get ejected from the simulation. I have made the initial 'shape' of the universe spherical now, reducing the ejections (corner particles had much weaker ties to the center mass), and the results are more pleasing, with very few ejections. This may be less realistic and more practical (fewer wasted particles), though. But neat that it seems to match what the real pros are getting, thanks for letting me know!!

And then, the globular clusters. My goal moving ahead is to not dictate galaxy shapes ("this is a spiral galaxy, this is an elliptical galaxy", etc.), but instead let galaxies gather on their own and see what happens (guiding them to do so, of course). But one thing I have tried to track down math on is the idea of treating all particles in a cluster (globular or not) as a single entity for each particle to compare gravitation to. The problem is that a particle will be inside that larger entity. But clusters of thousands should be possible to describe with an overall density, and notes on irregularities inside (like nearby spiral arms), so that the position of a particle (star, likely, or binary system already formed) inside it would, on its own, describe how the center and density of the whole thing affects it. A roughly equal dispersal should mean you know how much is in front, behind, beside, above and below you, depending on your position. Some kind of volumetric calculation, I'm guessing. Is there any accessible math on that??

And yes, most of the other sciences are useless at this point. I'm a nerd, it makes me happy to list sciences, but I also just wanted to communicate that I have studied things from several angles. Anyone reading my topic could be excused for thinking I was jumping in completely unaware of the complexity. I'm not, I'm just insane enough to laugh in the face of complexity :P

Edited by Embassy of Time

Posted (edited)

The 1000 run basically divides particles into time slots, matching ten particles with every other particle each cycle.


This is effectively the same as integrating with a larger timestep, which introduces accuracy problems. You'll be better off implementing an adaptive timestep, so that objects that don't have much force applied to them don't get evaluated quite as often.  Also, if you're using Euler integration (i.e. pos += vel * timestep), then consider something like leapfrog or, if you have the patience, RK4.
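A rough sketch of one common adaptive-timestep criterion (my own constants and names; many N-body codes use something of this shape): particles feeling strong forces get small steps, quiet particles get large ones.

import math

ETA = 0.02        # accuracy parameter (assumed; smaller = more accurate)
SOFTENING = 1e-3  # same small length used to soften the force calculation
DT_MAX = 0.01     # upper bound so nothing ever coasts too far in one step

def pick_timestep(acc):
    # acc is the particle's current acceleration vector (length 3).
    a = math.sqrt(acc[0]**2 + acc[1]**2 + acc[2]**2)
    if a == 0.0:
        return DT_MAX
    return min(DT_MAX, ETA * math.sqrt(SOFTENING / a))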
 

What I have been pondering for a day or two is to not use an octree, but instead treat clusters of particles as their own 'cells'.


If you take this path, you'll probably wind up back at the tree.  Regarding your concerns:

1) Crossing bounds:  Rebuild the tree every frame.  Since the particles are all in motion, everything needs to be recalculated anyway.

2) Accuracy: If you simply compare a point against all other cells that don't include that point, then yes you will run into problems where you can see the grid pattern distorting the path of the particles.  What you need to do is compare against other cells only when they are far enough away.

Below is an example of gravity using a quadtree in two dimensions. I haven't done anything like this in years, so I thought it'd be fun if I wrote some example code to refresh myself.  

https://codepen.io/kemick/full/XgjXPL/

Things got out of hand. It's about 500 lines, though most of that is boilerplate and comments.  The tree code is under 200 lines.  Either way, it implements both the tree and the traditional check-every-pair methods and you can switch between them at will.  It also draws the tree structure so that you can see how easily it is able to partition space and identify clusters of mass.

My goal was to make the tree code as simple as possible, but I was forced to do a few "clever" things to make it run decently (e.g. pooling node objects to avoid the gc).  Hopefully those don't impede readability too much.  At the moment, the biggest cost is simply drawing everything.  If you're comparing fps between tree and regular modes, then do it with "show cells" turned off to make it fair.  Also, for large numbers of particles the node pool dynamically increases its size, so it'll be just a bit slow until it "warms up".

The "fudge" value determines whether nodes are far enough away to be approximated or whether they need to be broken down into sub-nodes.  At large fudge values ( > 1) you will start to see artifacts from the tree.  At smaller fudge values ( < 0.5 ) you'll start to run into performance problems.  

On my computer, once you're dealing with 1000 particles the quad tree is clearly faster.  The tradeoff, of course, is increased memory usage.  This can be mitigated by reducing the node size (the nodes in this example code are enormous).  Additionally, this algorithm builds the tree until every particle has its own cell, but it would be trivial to add a limit to how deep the tree can go and have cells that contain numerous particles.  This would allow you to balance memory and cpu usage.

Also, I make no guarantee that the code is bug-free.  It was written in a coffee-fueled evening of binge programming.  I'm pretty sure there's a bug in the insertion code.  But it should be enough to convey the concept.

Edited by Kem7

This is effectively the same as integrating with a larger timestep, which introduces accuracy problems. You'll be better off implementing an adaptive timestep, so that objects that don't have much force applied to them don't get evaluated quite as often.  Also, if you're using Euler integration (i.e. pos += vel * timestep), then consider something like leapfrog or, if you have the patience, RK4.

The accuracy problems did worry me, but at 1000 particles, they barely register (I'm back at 500 particles now for some added experimentation, but I ran it with many higher counts). I doubt timestepping will be worth basing everything on; the higher the count, the higher the inaccuracies. So it is only an experiment for now, and solely for some "core particles". I will read up on the other methods you mention (thank you!!), although leapfrog, of course, gives a lot of irrelevant search results. Do you have a link to something I can read up on?

Below is an example of gravity using a quadtree in two dimensions. I haven't done anything like this in years, so I thought it'd be fun if I wrote some example code to refresh myself.  
https://codepen.io/kemick/full/XgjXPL/

Neat!! You're selling me more and more on the octree. I am currently running a test with a multi-tier (well, two-tier) particle hierarchy, but I now feel obligated to do the octree at some point in the near future! And of course to play around with your simulation. Colored dots are very addictive....!


Posted (edited)

The accuracy problems did worry me, but at 1000 particles, they barely register (I'm back at 500 particles now for some added experimentation, but I ran it with many higher counts). I doubt timestepping will be worth basing everything on; the higher the count, the higher the inaccuracies. So it is only an experiment for now, and solely for some "core particles". I will read up on the other methods you mention (thank you!!), although leapfrog, of course, gives a lot of irrelevant search results. Do you have a link to something I can read up on?

 

One of the biggest issues with these simulations is when particles get close.  As a particle's distance approaches 0, the force applied approaches infinity.  So if a particle passes close by a gravity source, it will be ejected with a lot more energy than it had going in.  It's going so fast on the way out that the gravity source will never pull as hard as it did on the way in.

The fix to this problem is a smaller time step.  However, a time step small enough to handle very close particles will drag the whole simulation to a crawl.  If you want both speed and accuracy, then an adaptive timestep is the way to go.  It will allow most of the particles to run at a much larger timestep than if you applied the same timestep to every particle.  Implementing this does require that you be a little more picky about how you integrate things, though.

Anyway, Wikipedia has a good description of the leapfrog method, though it is a bit cryptic.  The basic idea is that you interleave the evaluation of the position and velocity, so that the position is evaluated at dt * 1, dt * 2, etc. while the velocity/gravity is evaluated at dt * 1.5, dt * 2.5, etc.  The basic form of the leapfrog integrator (written here in the equivalent synchronized form, often called velocity Verlet) would look like this:

newPosition = oldPosition + oldVelocity * dt + (oldGforce * (dt * dt)) / 2   // drift using the stored force
newGforce   = getGravityForce(this)                                          // force at the new position
newVelocity = oldVelocity + dt * (oldGforce + newGforce) / 2                 // kick with the averaged force
oldGforce   = newGforce                                                      // remember for the next step

The only other thing to do is to initialise oldGforce to an appropriate value (the gravity force at the particle's starting position) before the first step.

 

Neat!! You're selling me more and more on the octree. I am currently running a test with a multi-tier (well, two-tier) particle hierarchy, but I now feel obligated to do the octree at some point in the near future! And of course to play around with your simulation. Colored dots are very addictive....!

 

When I did my first gravity simulations, I used a grid to partition space.  I quickly realized that I could merge grid cells to reduce the number of comparisons.  And, once I merged enough grid cells, I realized that I was building a tree (but backward).

So it's not an either/or sort of thing.  The tree is just a data structure that generalizes the merging of grid cells.  The concepts are all the same. The only difference is in the implementation.  In this implementation, the tree splits according to the space it covers and is sparse, so it does not create grid cells for empty space.  You could easily hand-code a 2- or 3-depth tree (splitting space into 4+1 or 16+4+1 cells for a quadtree) with minimal CPU and memory overhead if that's all the partitioning you need.
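As a concrete (and deliberately unoptimised) illustration of the merged-grid idea, here's a sketch of a single fixed-depth pass: bucket the bodies into a coarse grid, record each cell's total mass and centre of mass, then let each body see its own cell's members exactly and every other cell as one point mass. The names, the cell size and the softening term are all made up:

from collections import defaultdict

G = 1.0      # gravitational constant in simulation units (assumed)
CELL = 10.0  # coarse grid cell size (assumed)

def cell_of(pos):
    return (int(pos[0] // CELL), int(pos[1] // CELL), int(pos[2] // CELL))

def add_accel(acc, pos, m_other, pos_other):
    # Accumulate the acceleration due to a point mass m_other at pos_other.
    dx = [pos_other[k] - pos[k] for k in range(3)]
    d2 = dx[0]**2 + dx[1]**2 + dx[2]**2 + 1e-6   # tiny softening term
    f = G * m_other / (d2 ** 1.5)
    for k in range(3):
        acc[k] += f * dx[k]

def grid_accels(bodies):
    # Bucket bodies, then summarise each cell as (total mass, centre of mass).
    cells = defaultdict(list)
    for b in bodies:
        cells[cell_of(b.pos)].append(b)
    summary = {}
    for key, members in cells.items():
        m = sum(b.mass for b in members)
        com = [sum(b.mass * b.pos[k] for b in members) / m for k in range(3)]
        summary[key] = (m, com)
    # Exact forces inside the body's own cell, point-mass forces from all other cells.
    accs = []
    for b in bodies:
        home = cell_of(b.pos)
        acc = [0.0, 0.0, 0.0]
        for other in cells[home]:
            if other is not b:
                add_accel(acc, b.pos, other.mass, other.pos)
        for key, (m, com) in summary.items():
            if key != home:
                add_accel(acc, b.pos, m, com)
        accs.append(acc)
    return accs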

Edited by Kem7

[...]
 
One of the biggest issues with these simulations is when particles get close.  As a particle's distance approaches 0, the force applied approaches infinity.  So if a particle passes close by a gravity source, it will be ejected with a lot more energy than it had going in.


If you give particles a radius, so they represent ghostly spheres that can pass through each other, the force between two particles is proportional to 1/distance^2 if they are farther apart than 2 radii, but proportional to the distance if the spheres intersect. That gets rid of the problem of huge forces when the particles are too close together.
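A quick sketch of that force law as code (my own units and names, not from the post above): ordinary inverse-square gravity outside twice the radius, a linear ramp inside, and the two branches agree exactly at a separation of 2 radii.

G = 1.0  # gravitational constant in simulation units (assumed)

def pair_force_magnitude(m1, m2, d, radius):
    # Force between two 'ghostly' spheres of the given radius whose centres are d apart.
    if d >= 2.0 * radius:
        return G * m1 * m2 / (d * d)              # normal inverse-square law
    return G * m1 * m2 * d / (2.0 * radius)**3    # linear inside; matches the other branch at d = 2r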

 

[...]
 
One of the biggest issues with these simulations is when particles get close.  As a particle's distance approaches 0, the force applied approaches infinity.  So if a particle passes close by a gravity source, it will be ejected with a lot more energy than it had going in.


If you give particles a radius, so they represent ghostly spheres that can pass through each other, the force between two particles is proportional to 1/distance^2 if they are farther apart than 2 radii, but proportional to the distance if the spheres intersect. That gets rid of the problem of huge forces when the particles are too close together.

 

 

You can also check against a minimum distance in the gravity calculation or, if you're lazy like me, just don't apply any force within a certain radius or impose a speed limit.  Wikipedia has a stub on a more formal method that looks good: https://en.wikipedia.org/wiki/Softening

However, these don't really solve the inherent problem:  A particle samples the gravity field as it moves.  The faster it moves, the fewer samples it can collect. The fewer samples it can collect, the less accurate the result.  The only way to get an accurate result is to shorten the timestep.  If this results in unacceptable performance for your desired accuracy, then the solution is an adaptive timestep.
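For reference, that Softening article essentially amounts to replacing r^2 with r^2 + eps^2 in the force law (Plummer softening), the same trick quietly used in the earlier O(N^2) sketch. In code, roughly:

G = 1.0     # gravitational constant in simulation units (assumed)
EPS = 0.01  # softening length: below this scale, gravity is smoothed out

def softened_accel(m_other, dx, dy, dz):
    # Acceleration due to one body of mass m_other at offset (dx, dy, dz).
    d2 = dx * dx + dy * dy + dz * dz + EPS * EPS
    f = G * m_other / (d2 ** 1.5)   # G * m / (r^2 + eps^2)^(3/2)
    return f * dx, f * dy, f * dz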


I simply cheated: If a particle is too close, it keeps its velocity unchanged. It then goes by the other and regains enough distance to calculate properly. Yeah, before I did that, those things went flying EVERYWHERE!

Not sure I fully understand the advantage of the leapfrog or midpoint? I mean, the implementation seems easy enough, but what do I get out of it?

As for the octree, I keep getting more keen on it. I have another experiment running 25,000 particles at an okay rate, but at some point I clearly need to familiarize myself with some octree algos!


However, these don't really solve the inherent problem:  A particle samples the gravity field as it moves.  The faster it moves, the fewer samples it can collect. The fewer samples it can collect, the less accurate the result.  The only way to get an accurate result is to shorten the timestep.  If this results in unacceptable performance for your desired accuracy, then the solution is an adaptive timestep.

This is the eternal issue, isn't it? What bothers me is that my brain keeps telling me that there should be a way to compress all the forces into a single equation (possibly one per particle) and find the position of the particle as a function of time. Like if you calculate two bodies influencing each other: you can (to simplify it) calculate a barycenter and simply input the point in time you want to sample their positions and voila, the equation returns the positions. There would be no need for a timestep, and the CPU pressure is next to nothing (compared to the current options available, anyway). I have even done some theoretical work on how to move from two to three bodies, hoping that it will let me figure out how to do it with n bodies, but it seems a bit.... challenging. So for now, lesser methods apparently need to be used....


I simply cheated: If a particle is too close, it keeps its velocity unchanged. It then goes by the other and regains enough distance to calculate properly. Yeah, before I did that, those things went flying EVERYWHERE!

Okay, that's one solution indeed!  :-)   I guess it's important to remind yourself what you are trying to achieve.  Are you trying to model the actual physics, or make something that LOOKS real for gaming purposes?  If the latter, then tricks like this, or the softening which was mentioned earlier (and which is used in many 'real' science/astro simulations, btw), are perfectly valid.  Regarding the suggestion about adaptive timesteps, this is again something you would definitely do to follow the physics more accurately, but then you have unpredictable rates, since your simulation slows down whenever two stars get close and then speeds up again, all very jerky and unstable.  So a fixed timestep works fine if you're not bothered about getting things right in close interactions.  But if you want to model the physics accurately (out of some scientific curiosity, for instance), then definitely consider it.

 

Not sure I fully understand the advantage of the leapfrog or midpoint? I mean, the implementation seems easy enough, but what do I get out of it?

The advantage is that you get more accurate integration of the orbits (actually important both for scientifically accurate simulations and for making it look better), since leapfrog/mid-point gives second-order errors compared to Euler integration, which is first-order.  A leapfrog integrator is also a special kind of integrator (called a symplectic scheme) which has wonderful conservation properties and, amongst other things, preserves circular and elliptical orbits (e.g. planets or binaries) very well.  The thing is, Euler and leapfrog are as expensive as each other, just one force computation per timestep, so in that case you would use the better one, right? :-)

 

This is the eternal issue, isn't it? What bothers me is that my brain keeps telling me that there should be a way to compress all the forces into a single equation (possibly one per particle) and find the position of the particle as a function of time. Like if you calculate two bodies influencing each other: you can (to simplify it) calculate a barycenter and simply input the point in time you want to sample their positions and voila, the equation returns the positions. There would be no need for a timestep, and the CPU pressure is next to nothing (compared to the current options available, anyway). I have even done some theoretical work on how to move from two to three bodies, hoping that it will let me figure out how to do it with n bodies, but it seems a bit.... challenging. So for now, lesser methods apparently need to be used....

I'm going to have to disappoint you on this one.  You are of course correct for two bodies.  That problem was solved by Sir Isaac Newton some 300+ years ago :-) .  And in the intervening centuries, everybody and their cat has tried to add just one more body to solve the so-called 3-body problem.  But there is no general solution, I'm afraid, that can be used like you hope for.  There are various special so-called 'restricted' 3-body problems where the motion of one body is limited in some way so it becomes a 2-body problem again, or has some special solution for some configuration.  But in general, and especially when this becomes 4, 5 ... N bodies, you have to do the numerical integration, I'm afraid!  This is one of the step-ups from linear/soluble equations to non-linear equations.  All the 'geniuses' of yesteryear solved the 'easy' linear stuff :-D and left all the difficult non-linear stuff to our generation!  :-/   Oh well ...
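To make the two-body point concrete, here is a minimal sketch of the simplest closed-form case: two bodies on circular orbits about their barycenter (toy units; the general two-body case needs Kepler's equation rather than this shortcut). It shows the "plug in t, get positions" behaviour that stops existing beyond two bodies.

import math

G = 1.0  # gravitational constant in simulation units (assumed)

def circular_two_body(m1, m2, separation, t):
    # Positions (x, y) of two bodies on circular orbits about their common barycenter at time t.
    omega = math.sqrt(G * (m1 + m2) / separation**3)  # orbital angular velocity (Kepler's third law)
    r1 = separation * m2 / (m1 + m2)                  # body 1's distance from the barycenter
    r2 = separation * m1 / (m1 + m2)                  # body 2's distance from the barycenter
    angle = omega * t
    p1 = ( r1 * math.cos(angle),  r1 * math.sin(angle))
    p2 = (-r2 * math.cos(angle), -r2 * math.sin(angle))
    return p1, p2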

 

Regarding the oct-tree, I'm still a little unsure whether you really need it for gravity integration (for the reasons discussed earlier about the ~1000-body limit).  However, if you want to model some kind of larger Universe with large-scale structure and then 'zoom' into galaxies and further, then having a tree structure to organise this data hierarchically is obviously a good thing.  If you can combine it with helping to compute the gravity, then great, but I'm still unsure whether you'll get that much benefit while maintaining 30/60 fps.


However, these don't really solve the inherent problem:  A particle samples the gravity field as it moves.  The faster it moves, the fewer samples it can collect. The fewer samples it can collect, the less accurate the result.  The only way to get an accurate result is to shorten the timestep.  If this results in unacceptable performance for your desired accuracy, then the solution is an adaptive timestep.

This is the eternal issue, isn't it? What bothers me is that my brain keeps telling me that there should be a way to compress all the forces into a single equation (possibly one per particle) and find the position of the particle as a function of time. Like if you calculate two bodies influencing each other: you can (to simplify it) calculate a barycenter and simply input the point in time you want to sample their positions and voila, the equation returns the positions. There would be no need for a timestep, and the CPU pressure is next to nothing (compared to the current options available, anyway). I have even done some theoretical work on how to move from two to three bodies, hoping that it will let me figure out how to do it with n bodies, but it seems a bit.... challenging. So for now, lesser methods apparently need to be used....


I really can't tell if you are trolling at this point. I'll feed you a Wikipedia page about the subject: https://en.wikipedia.org/wiki/N-body_problem . No, you will not find a general analytical solution for anything beyond n=2; only special restricted cases of n=3 have closed-form solutions.
