Gravity systems?


Oh no, no no no, don't you worry, I am NOT simulating a full universe!!!!

Yes, I didn't think you were planning that from your original post, but the way the thread was going, it seemed like things were pointing in that direction, so I thought I should point out right now that it's a very bad idea, unless you want to train to be an astrophysicist yourself :-)

At the moment, I am trying to put the first tiny pieces into action, so to speak, and I just want to avoid wasting time on a lot of overly complicated math if there are one or more shortcuts that can be taken. I just got basic gravity (Newton's law of universal gravitation) working, for example. I know the games / simulations you list, and I doubt that the parts of mine that simulate the cosmos will be even that advanced! All I want is something that can claim to be based in science, not something that is scientifically accurate. I know (some of) my limitations.

So, the Universe is a pretty big topic to discuss, so where to begin :-) . Newton's law of gravity is certainly useful for very basic motions, such as how planets (or moons or rocket ships) orbit around stars (or other planets), but it will be too expensive if you try to model larger systems with more than a dozen (or perhaps a couple of hundred) objects. What you should be looking at is developing (or reading about) simple models that use simple mathematical equations to represent the gravitational potential field (or simple combinations of such fields), and then your physical objects (e.g. galaxies, stars, etc.) move as test particles (https://en.wikipedia.org/wiki/Test_particle) in these potential fields, rather than computing the full, insanely expensive gravitational interaction between every pair of objects. Let's take a basic galaxy as an example (see the code sketch after this list for the test-particle idea in action). It contains:

  • A spherically symmetric halo of dark matter. It doesn't do anything interesting really, but it holds the galaxy together gravitationally and can be approximated by a simple equation (e.g. an NFW profile, https://en.wikipedia.org/wiki/Navarro%E2%80%93Frenk%E2%80%93White_profile)
  • A halo of old stars, many contained in very old massive clusters called globular clusters. The globular clusters can be very easily modelled with something called a Plummer potential profile (https://en.wikipedia.org/wiki/Plummer_model)
  • The gas in the galaxy (often called the interstellar medium or ISM) lies in the disc of the galaxy but is often concentrated in spiral arms, as in the classic picture of galaxies (there's no simple article for this, sorry, but maybe this helps: https://en.wikipedia.org/wiki/Gravitationally_aligned_orbits)
  • The gas is compressed in these arms, leading to new stars forming from the gas. The density of gas and the formation of new stars follow several empirical relations, like the Kennicutt-Schmidt relation, which allow you to approximately model how this happens (sorry again, no simple article on this). These stars then spread into the disc of the galaxy and move around in (approximately) circular orbits within a spiral potential.
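
If you want to see how cheap the test-particle approach is in code, here's a minimal sketch (Python, with made-up simulation units; GM and the scale radius A are assumed illustration values, not numbers from anywhere in this thread) of the pull a single star would feel inside a Plummer sphere like the globular clusters above:

GM = 1.0  # G * total cluster mass, in arbitrary simulation units (assumed)
A = 0.5   # Plummer scale radius (assumed)

def plummer_accel(x, y, z):
    # Acceleration of a test particle in a Plummer potential centred on the
    # origin: a(r) = -GM * r_vec / (r^2 + A^2)^(3/2).
    # One cheap formula instead of summing over every member star, and it
    # stays well-behaved even at r = 0.
    s = (x * x + y * y + z * z + A * A) ** 1.5
    return (-GM * x / s, -GM * y / s, -GM * z / s)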

And that's just to model a typical everyday spiral galaxy. If you want to model how this forms (in a hand-wavy way) from the Big Bang, then you need to consider the large-scale picture: the formation of galaxy clusters, and how protogalaxies merge and form the next generation of galaxies. That's not quite my own field, but I know enough to know it's possible to do in some nice approximate way like you are probably suggesting. You could, for example, model a Universe with various smaller galaxy seeds (but not too many, unless you want the performance issues again). There would need to be some underlying filament potential which describes the large-scale structure of the Universe (e.g. https://phys.org/news/2014-04-cosmologists-cosmic-filaments-voids.html), and then your test particles would move towards the filaments to mimic the natural formation of the large-scale structure.

As you might guess, there is a lot that goes into this (and even I'm not 100% sure of all the steps), but I hope this little post has helped explain a possible path to modelling a Universe on the cheap! ;-)

Of course, now that you have 'outed' yourself as an astrophysicist, this debate might get a whole lot more interesting ;-)

Haha, guilty as charged! Never thought I was hiding though! :P

Incidentally, I can't find anything on a King Profile, do you have a good link explaining that?

Hmmm, unfortunately some of these 'basic' things don't have nice pages for the layman like the wiki ones. There are tons of astro articles, such as university pages, if you type 'King cluster profile' into Google, but I'm not sure which one is good to recommend. Hopefully there's enough info and links here to help for now! ;-)


<all of it>

Yeeeesssss...... The knowledge must floooooww......

*cough* That's some good stuff, everything is already open in browser tabs and I will be braining out on it ASAP!! I did do a fairly in-depth study into nearly every science before this (starting with quantum mechanics, then astrophysics, nuclear physics, chemistry, geology (incl. geophysics, geochemistry, geomorphology, etc.), climatology, glaciology, biochemistry & biology, evolution (incl. human evolution), neurology, psychology (linking those two is like dabbling in black magic...), sociology, economics, politics, architecture, engineering, all manner of history, and modern technology. It was a .... fun few months!), and I managed to very crudely simulate cosmic filaments in the older version:

[image: screenshot of crudely simulated cosmic filaments from the older version]

The problem is that they are static; they do not develop from an older form. I am hoping to put together something that will take a fairly random dispersal of masses and have them gather into something coherent and useful. I did a 50 particle raw test run, with only Newton's universal gravitation to guide it. It gave fairly interesting results (note that movement lines are enabled, because there is no animation to show what happens):

[image: 50-particle test run, with movement lines showing each particle's path]

It may look like just a splatter, but if you know what to look for (I am guessing you might), you can see a few masses being shot or flung completely out of the main system, but several others gathering in a few clusters, many near to each other. I am currently looking for a way to add more particles while reducing CPU burn (it does a complete O(n²) gravity comparison). Not that this is my main question; the stuff you gave will hold me over for some time! But once I get a better way of doing this, my hope is to consider the simulated masses to be 'mass centers', representing many smaller masses, possibly in some lattice configuration. I still don't know, and any and all input is welcome, astrophysicist or not :)


I am currently looking for a way to add more particles while reducing CPU burn (it does a complete O(n²) gravity comparison). Not that this is my main question; the stuff you gave will hold me over for some time! But once I get a better way of doing this, my hope is to consider the simulated masses to be 'mass centers', representing many smaller masses, possibly in some lattice configuration.

You're on to a good approach. I'll try to help you forward.

You don't need to compare every particle against every other particle. If you want to know the g-force exerted by a distant cloud of particles, you can simply calculate the g-force exerted by the center of mass of that cloud of particles. You don't need to calculate the individual g-forces for each particle. However, if you are inside the cloud (rather than far away), this method will fail. This behavior is described by Newton's shell theorem.
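
For example, collapsing a cloud into a single point mass is just a mass-weighted average (a minimal Python sketch; the m/x/y particle fields are assumed names, not from any code in this thread):

def center_of_mass(cloud):
    # Replace a distant cloud of particles with one point mass:
    # the total mass, located at the mass-weighted mean position.
    m = sum(p.m for p in cloud)
    x = sum(p.m * p.x for p in cloud) / m
    y = sum(p.m * p.y for p in cloud) / m
    return m, x, y  # one gravity calculation instead of len(cloud)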

So imagine that a simulated universe has "west", "middle", and "east" sides. If you have a particle on the "west" side, you don't need to compare it against every single particle on the "east" side. All you need to do is compare it against the center of mass of the east side.

Divide the universe into cells. For each cell, you compute the center of mass of all the particles inside it. Then, for every particle, you calculate the forces from 1) all the nearby particles and 2) all the farther away cells. How you define "near" and "far" will determine how accurate the result is. You can implement the cells with a spatial hash, but you might be better off with an octree. The tree has a higher setup and memory cost but will result in fewer g-force calculations.

Here's pseudo-code for calculating gravity from an existing octree.


func evalCell(Particle p, Cell cell) {   // Evaluate current cell
 if(isFarEnough(p, cell)) {              // Far enough away?
  applyGravity(p, cell);                 //  Compute cell gravity force
 } else if(cell.isLeaf()) {              // Too close, and no sub-cells?
  foreach(Particle q : cell.particles) { //  Fall back to direct
   applyGravity(p, q);                   //  particle-particle gravity
 }} else {                               // Too close, with sub-cells?
  foreach(Cell child : cell.children) {
   evalCell(p, child);                   //  Recurse into each sub-cell
}}}
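
The isFarEnough check is the key knob. A common choice is the Barnes-Hut opening-angle criterion; here's a minimal Python sketch (cell.size and the centre-of-mass fields are assumed names, not part of the pseudo-code above):

import math

def is_far_enough(p, cell, theta=0.5):
    # Treat the cell as a single point mass if its side length looks small
    # from the particle: size / distance < theta. Smaller theta means more
    # accuracy but more force calculations.
    dx = cell.com_x - p.x
    dy = cell.com_y - p.y
    d = math.sqrt(dx * dx + dy * dy)
    return cell.size < theta * d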

This will only get you down to O(n log n). You can make a few optimizations or throw a few more cpu cores at the problem, but what you are doing is inherently expensive. Generating interesting simulations is going to require massive numbers of particles.

So implement this algorithm, but also plan on implementing ways to fudge the results toward what you want. For example, you could have small particles merge into larger particles, or pseudo-dark matter that provides a static anchor for particles to attract to (without adding much computational cost).
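
As a sketch of the merging fudge (Python, with assumed particle fields; the merged particle conserves total mass and momentum):

def merge(a, b):
    # Replace two close particles with one: total mass at the pair's
    # centre of mass, moving with the momentum-conserving velocity.
    m = a.m + b.m
    x = (a.m * a.x + b.m * b.x) / m
    y = (a.m * a.y + b.m * b.y) / m
    vx = (a.m * a.vx + b.m * b.vx) / m
    vy = (a.m * a.vy + b.m * b.vy) / m
    return m, x, y, vx, vy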

I did do a fairly in-depth study into nearly every science before this (starting with quantum mechanics, then astrophysics, nuclear physics, chemistry, geology (incl. geophysics, geochemistry, geomorphology, etc.), climatology, glaciology, biochemistry & biology, evolution (incl. human evolution), neurology, psychology (linking those two is like dabbling in black magic...), sociology, economics, politics, architecture, engineering, all manner of history, and modern technology.

Yes, I did read through some of them. I have to say most of those are completely irrelevant to the problem of the formation of structure in the Universe, hehe :-) . But having a broad knowledge is always a good thing before embarking on an epic project. However, it's also not good to get lost in all that, since it might be tricky figuring out where on Earth to start (believe me, I've done the same thing).

The problem is that they are static; they do not develop from an older form. I am hoping to put together something that will take a fairly random dispersal of masses and have them gather into something coherent and useful. I did a 50 particle raw test run, with only Newton's universal gravitation to guide it. It gave fairly interesting results (note that movement lines are enabled, because there is no animation to show what happens):

Okay, first of all, the large-scale filamentary structures that develop in the Universe are kind of static in a way, although they are more the hubs of growth of galaxies and clusters of galaxies. Take this famous simulation of the Universe, the Millennium Simulation (https://wwwmpa.mpa-garching.mpg.de/galform/virgo/millennium/#slices); if you look at the slice images at different redshifts/z-values (which is the same as different ages of the Universe), you can see the filaments start very thin but then grow and accumulate mass, but they don't really move around that much. What you want (if you really want to mimic the evolution of the Universe) is something that starts roughly uniform and then has filaments grow out of the background, gravitationally attracting more and more mass and creating large galaxies and clusters. And the trick will be to do this without actually computing all that expensive gravity.

About your 50 particle run, that actually looks exactly like what these simulations should look like with basic gravity, so I guess you're on the right track ;-) . You'll occasionally see stars get ejected and also binary stars (you can see the two stars orbiting each other almost getting kicked out). Here's a famous (in astrophysics) example with just 3 stars but with similar features to yours (http://www.ucolick.org/~laugh/oxide/projects/burrau.html).

But back to the issue of computing for more bodies. The reply above is correct in that you would need some kind of tree to efficiently compute gravity for larger numbers. However, you'll only need this when the numbers are of order 1000 or more, and by that point it takes (on a single CPU) about 0.01 s to compute all the forces for all bodies. If you want a game running at 30/60 fps, you're already near the limit, so it's questionable whether using a tree is worth it. Also, as I said in my previous post, this is the route to actually computing the real physics, which is NOT what you want to be doing! You want to mimic these processes so they look real, rather than doing all the hard work.

I was trying to make this point yesterday (probably not well enough) with the globular clusters. Globular clusters have millions of stars, and there's no way in hell you could model that by computing the real gravitational force. But these clusters have a simple average density and potential field (given by the equations in the wiki article), and using that instead will give you the motion of a star around the cluster: one equation instead of hundreds of thousands with real gravity. There are sophistications that can be made of course, such as checking for nearby interactions with stars to model scattering and ejections, but these are the icing on the cake really! I'm a bit busy at the moment, but maybe I'll put together some pseudo-code for you to see what I mean, because it's tricky stuff. But it'll have to be in a few days, sorry!

In the meantime, don't drown on all the knowledge! :P

In the meantime, don't drown on all the knowledge! :P

When I one day have to go, I want THAT to be the way I go out!!! And have it on my tombstone, "In knowledge, he drowned"! But on to more pressing matters....

I am going to view your two replies (Kem7 and sergamer1) as one, because this is good stuff both, and deserves to be combined, Voltron-style...

Regarding using octrees or similar methods, this is something I am poking around with a lot at the moment. I actually managed to get a fairly solid 1000-particle simulation (yes, one thousand) to run smoothly on my small laptop, with some trickery, but I am now looking into possibilities for bigger versions. It should be noted that although I am trying to run relatively fast simulations right now, the game will not run them this fast, at least not when getting into details. The large-scale astrophysics ('small-scale' being interactions inside a star system, or between close stars) is there to set the stage for the later game. Foreplay, in a manner of speaking (sorry).

The 1000 run basically divides particles into time slots, matching ten particles with every other particle each cycle. Note that when particles are matched, BOTH particles are influenced, so a complete cycle through all particles actually matches each set twice (once from either particle's POV). I am skeptical of the octree approach, because many particles can easily end up so close that they break the advantage, and the movement of particles also indicates that octree borders (between cells) will cause problems. Either two close particles will not respond realistically to their proximity (because one is counted into a larger cell average), or any cell has to treat its closest cells on a particle-by-particle basis anyway, which makes estimating the CPU load a nightmare. Also, my OCD takes over and screams "octrees are not real!!", convincing me that there will be a noticeable effect that shows where cells are, in the motion of particles.

What I have been pondering for a day or two is to not use an octree, but instead treat clusters of particles as their own 'cells'. One way is to somehow calculate where the density of particles is highest, and treat those places as cluster cells; anything inside acts on anything inside on a particle-by-particle basis, while anything outside interacts with the cluster as a single 'cluster particle'. An alternate way is to have A and B particles, where perhaps 100-200 A particles use the full simulation, and B particles only react to the 10 or so nearest A particles, and in return strengthen the A particles when they get close to them. This approach, I am thinking, would simulate the idea that it is the closest others that have by far the greatest influence, while still letting the entire universe have some influence through the A particles. Yeah, that's still a lot of influences, but the math should, theoretically, be far less CPU demanding. Your views on the matter??

Regarding the cosmic filaments, I am actually a bit confused. I studied them a lot in the early stages (and walls and voids and so on), but I always felt that they were an "exploding liquid" kind of thing, distribution-wise! They look like a still shot of the water inside a water balloon during a powerful burst, and it would seem logical that the universe grew in a similar way, with denser areas drawing in star stuff more as they drifted apart. But what you (sergamer1) describe makes them seem almost static? The narrowing makes sense, they attract their surroundings, but I would have thought that nodes and filaments themselves moved, at least somewhat, as the universe expanded. No?

And yes, a lot of stars (technically, mass centers) get ejected from the simulation. I have made the initial 'shape' of the universe spherical now, reducing the ejections (corner particles had much weaker ties to the center mass), and the results are more pleasing, with very few ejections. This may be less realistic and more practical (fewer wasted particles), though. But neat that it seems to match what the real pros are getting, thanks for letting me know!!

And then, the globular clusters. My goal moving ahead is not to dictate galaxy shapes ("this is a spiral galaxy, this is an elliptical galaxy", etc.), but instead to let galaxies gather on their own and see what happens (guiding them to do so, of course). But one thing I have tried to track down math on is the idea of treating all particles in a cluster (globular or not) as a single entity for each particle to compare gravitation to. The problem is that a particle will be inside that larger entity. But clusters of thousands should be possible to describe with an overall density, plus notes on irregularities inside (like nearby spiral arms), so that the position of a particle (a star, likely, or an already-formed binary system) inside it would, on its own, describe how the center and density of the whole thing affect it. A roughly equal dispersal should mean you know how much is in front, behind, beside, above and below you, depending on your position. Some kind of volumetric calculation, I'm guessing. Is there any accessible math on that??

And yes, most of the other sciences are useless at this point. I'm a nerd, it makes me happy to list sciences, but I also just wanted to communicate that I have studied things from several angles. Anyone reading my topic could be excused for thinking I was jumping in completely unaware of the complexity. I'm not, I'm just insane enough to laugh in the face of complexity :P


The 1000 run basically divides particles into time slots, matching ten particles with every other particle each cycle.


This is effectively the same as integrating with a larger timestep, which introduces accuracy problems. You'll be better off implementing an adaptive timestep, so that objects that don't have much force applied to them don't get evaluated quite as often. Also, if you're using Euler integration (i.e. pos += vel * timestep) then consider something like leapfrog or, if you have the patience, RK4.

What I have been pondering for a day or two is to not use an octree, but instead treat clusters of particles as their own 'cells'.


If you take this path, you'll probably wind up back at the tree. Regarding your concerns:

1) Crossing bounds: Rebuild the tree every frame. Since the particles are all in motion, everything needs to be recalculated anyway.

2) Accuracy: If you simply compare a point against all other cells that don't include that point, then yes you will run into problems where you can see the grid pattern distorting the path of the particles. What you need to do is compare against other cells only when they are far enough away.

Below is an example of gravity using a quadtree in two dimensions. I haven't done anything like this in years, so I thought it'd be fun if I wrote some example code to refresh myself.

https://codepen.io/kemick/full/XgjXPL/

Things got out of hand. It's about 500 lines, though most of that is boilerplate and comments. The tree code is under 200 lines. Either way, it implements both the tree and the traditional check-every-pair methods and you can switch between them at will. It also draws the tree structure so that you can see how easily it is able to partition space and identify clusters of mass.

My goal was to make the tree code as simple as possible, but I was forced to do a few "clever" things to make it run decently (e.g. pooling node objects to avoid the gc). Hopefully those don't impede readability too much. At the moment, the biggest cost is simply drawing everything. If you're comparing fps between tree and regular modes, then do it with "show cells" turned off to make it fair. Also, for large numbers of particles the node pool dynamically increases its size, so it'll be just a bit slow until it "warms up".

The "fudge" value determines whether nodes are far enough away to be approximated or whether they need to be broken down into sub-nodes. At large fudge values ( > 1) you will start to see artifacts from the tree. At smaller fudge values ( < 0.5 ) you'll start to run into performance problems.

On my computer, once you're dealing with 1000 particles the quad tree is clearly faster. The tradeoff, of course, is increased memory usage. This can be mitigated by reducing the node size (the nodes in this example code are enormous). Additionally, this algorithm builds the tree until every particle has its own cell, but it would be trivial to add a limit to how deep the tree can go and have cells that contain numerous particles. This would allow you to balance memory and cpu usage.

Also, I make no guarantee that the code is bug-free. It was written in a coffee-fueled evening of binge programming. I'm pretty sure there's a bug in the insertion code. But it should be enough to convey the concept.


This is effectively the same as integrating with a larger timestep, which introduces accuracy problems. You'll be better off implementing an adaptive timestep, so that objects that don't have much force applied to them don't get evaluated quite as often. Also, if you're using Euler integration (i.e. pos += vel * timestep) then consider something like leapfrog or, if you have the patience, RK4.

The accuracy problems did worry me, but at 1000 particles, they barely register (I'm back at 500 particles now for some added experimentation, but I ran it with many higher counts). I doubt timestepping will be worth basing everything on; the higher the count, the higher the inaccuracies. So it is only an experiment for now, and solely for some "core particles". I will read up on the other methods you mention (thank you!!), although leapfrog, of course, gives a lot of irrelevant search results. Do you have a link to something I can read up on?

Below is an example of gravity using a quadtree in two dimensions. I haven't done anything like this in years, so I thought it'd be fun if I wrote some example code to refresh myself.
https://codepen.io/kemick/full/XgjXPL/

Neat!! You're selling me more and more on the octree. I am currently running a test with a multi-tier (well, two-tier) particle hierarchy, but I now feel obligated to do the octree at some point in the near future! And of course playing around with your simulation. Colored dots are very addictive....!


The accuracy problems did worry me, but at 1000 particles, they barely register (I'm back at 500 particles now for some added experimentation, but I ran it with many higher counts). I doubt timestepping will be worth basing everything on; the higher the count, the higher the inaccuracies. So it is only an experiment for now, and solely for some "core particles". I will read up on the other methods you mention (thank you!!), although leapfrog, of course, gives a lot of irrelevant search results. Do you have a link to something I can read up on?

One of the biggest issues with these simulations is when particles get close. As a particle's distance approaches 0, the force applied approaches infinity. So if a particle passes close by a gravity source, it will be ejected with a lot more energy than it had going in. It's going so fast on the way out that the gravity source will never pull as hard as it did on the way in.

The fix to this problem is a smaller timestep. However, a timestep small enough to handle very close particles will drag the whole simulation to a crawl. If you want both speed and accuracy, then an adaptive timestep is the way to go. It will allow most of the particles to run at a much larger timestep than if you applied the same timestep to every particle. Implementing this does require that you be a little more picky about how you integrate things, though.
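
One common heuristic for choosing each particle's step (a sketch under my own assumptions; real N-body codes use many variants) is to shrink the timestep where the acceleration is large:

import math

def adaptive_dt(accel_mag, dt_max, eta=0.02, softening=0.05):
    # Step more finely when a particle is being pulled hard:
    # dt = eta * sqrt(softening / |a|), capped at the global maximum step.
    # eta and softening are tuning knobs, not values from this thread.
    if accel_mag <= 0.0:
        return dt_max
    return min(dt_max, eta * math.sqrt(softening / accel_mag))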

Anyway, Wikipedia has a good description of the leapfrog method, though it is a bit cryptic. The basic idea is that you interleave the evaluation of the position and velocity, so that the position is evaluated at dt * 1, dt * 2, etc. while the velocity/gravity is evaluated at dt * 0.5, dt * 1.5, etc. The basic form of the leapfrog integrator would look like this:


position += velocity * dt + oldGforce * (dt * dt) / 2;  // drift using the old acceleration
newGforce = getGravityForce(this);                      // acceleration at the NEW position
velocity += (oldGforce + newGforce) * dt / 2;           // kick with the average of old and new
oldGforce = newGforce;                                  // remember for the next step

The only other thing to do is to initialize oldGforce with the gravity at the starting position before the first step (the velocity updates are effectively offset from the position updates by half a timestep).

Neat!! You're selling me more and more on the octree. I am currently running a test with a multi-tier (well, two-tier) particle hierarchy, but I now feel obligated to do the octree at some point in the near future! And of course playing around with your simulation. Colored dots are very addictive....!

When I did my first gravity simulations, I used a grid to partition space. I quickly realized that I could merge grid cells to reduce the number of comparisons. And, once I merged enough grid cells, I realized that I was building a tree (but backward).

So it's not an either/or sort of thing. The tree is just a data structure that generalizes the merging of grid cells. The concepts are all the same; the only difference is in the implementation. In this implementation, the tree splits according to the space it covers and is sparse, so it does not create cells for empty space. You could easily hand-code a 2- or 3-depth tree (splitting space into 4+1 or 16+4+1 cells for a quadtree) with minimal CPU and memory overhead if that's all the partitioning you need.

[...]

One of the biggest issues with these simulations is when particles get close. As a particle's distance approaches 0, the force applied approaches infinity. So if a particle passes close by a gravity source, it will be ejected with a lot more energy than it had going in.


If you give particles a radius, so they represent ghostly spheres that can pass through each other, the force between two particles is proportional to 1/distance^2 if they are farther apart than 2 radii, but proportional to distance if the spheres intersect. That gets rid of the problem of huge forces when the particles are too close together.
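
Here's a minimal sketch of that rule (Python; GM and r_soft, standing in for the '2 radii' overlap distance, are assumed illustration values), with the linear interior force matched to the inverse-square exterior at the boundary:

import math

def softened_force(dx, dy, dz, GM=1.0, r_soft=0.1):
    # Force per unit mass on a particle, switching from inverse-square
    # gravity to a linear-in-distance pull once the 'ghostly spheres'
    # overlap (r < r_soft). Continuous at r = r_soft; assumes r > 0.
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    if r >= r_soft:
        f = GM / (r * r)          # ordinary inverse-square gravity
    else:
        f = GM * r / r_soft**3    # inside: force grows linearly with r
    return (-f * dx / r, -f * dy / r, -f * dz / r)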

