
# Procedural Universe: The illusion of infinity


## Recommended Posts

Hello,

This is my first time posting here, so nice to meet you everyone!

For some time now I've been trying to find a good solution for rendering a procedurally generated universe.

I've read most of the articles on the subject (or at least what Google provided me) and I'm still not satisfied with the information I have.

Since tutorials on this subject are almost non-existent or outdated, I began trying to think my way out of this.

I was thinking that if we can use octrees for collision detection, frustum culling and raytracing, why can't we use this beautiful data structure to solve the universe rendering problem?

So, from what we know, we have four types of "spaces":

1. Intergalactic space

2. Interstellar space

3. Interplanetary space

4. Planet surface

The root cuboid would be the universe itself, using the parsec scale.

The child cuboids would represent intergalactic space, using the light-year scale, recursing all the way down to the surface of a planet.

Now I know that I would have to somehow interpolate the scales and camera projection settings between these spaces in order to have a smooth transition.

After getting inside a cuboid (interplanetary space), I would render the surroundings into a cubemap and use it as my "skybox".

Using the octree method would, I believe, give me plenty of advantages, one of them being the ease with which I can generate galaxies, star systems and planets. For instance, I could use a noise function to generate children for the root cuboid, thus generating procedural galaxies. I could apply the same algorithm for generating interplanetary bodies, and so on and so forth.
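The noise-driven child generation could be sketched like this. Everything here is a hypothetical illustration: the splitmix64 hash, the density threshold, and the function names are my own choices, not part of any described implementation. The key property is determinism, so revisiting a region regenerates exactly the same galaxies.

```cpp
#include <cassert>
#include <cstdint>

// splitmix64: a well-known 64-bit mixing function, used here as a
// stateless hash so each cuboid's content is a pure function of its
// coordinates (deterministic "noise").
uint64_t splitmix64(uint64_t x) {
    x += 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

// Density in [0, 1) derived from a child cuboid's coordinates and its
// depth in the octree. The primes are arbitrary mixing constants.
double cuboidDensity(int64_t x, int64_t y, int64_t z, int depth) {
    uint64_t h = splitmix64(uint64_t(x) * 73856093ULL ^
                            uint64_t(y) * 19349663ULL ^
                            uint64_t(z) * 83492791ULL ^
                            uint64_t(depth) * 2654435761ULL);
    // Keep the top 53 bits so the value fits a double's mantissa exactly.
    return double(h >> 11) * (1.0 / 9007199254740992.0);
}

// A child cuboid is "populated" (e.g. contains a galaxy) when its
// density exceeds a threshold; the threshold would be tuned per level.
bool hasGalaxy(int64_t x, int64_t y, int64_t z, int depth) {
    return cuboidDensity(x, y, z, depth) > 0.8;
}
```

The same function works at every level of the hierarchy: the `depth` argument keeps a star system's layout independent of the galaxy-level noise above it.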

While inside a planetary space cuboid, I could project the distant stars onto the cubemap based on their luminosity. For instance, bright stars would get rendered as bright spots on the cubemap; I think you get the point here.

I know that in order to overcome floating point precision problems I would have to somehow scale the surrounding cuboids (spaces) to keep the depth buffer happy. This should be pretty easy, at least in theory.

Now the real problem is the transition of the camera from a high-level space to a lower-level space. Some interpolation should be used here to make it smooth, so that when we move from the root cuboid into a child cuboid, objects don't just instantly scale up.

I was thinking of subdividing the universe to a point where a cuboid space could give me enough floating point precision to render its content without any unpleasant surprises, and using a logarithmic depth buffer to overcome z-fighting at very large distances.

The interplanetary space cuboid would use doubles for positioning, giving me precision ranging from 1 mm to about 1 trillion km, as Sean O'Neill states in one of his articles on the subject.
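A standard trick for that doubles-based positioning is camera-relative rendering: store positions in doubles, subtract the camera position in double precision, and hand only the small camera-relative offset to the GPU as floats. This is my own sketch of that common technique (names are mine), not something taken from the article:

```cpp
// Camera-relative rendering sketch: precision is preserved near the
// viewer because the large absolute coordinates cancel out in double
// precision before the truncation to float happens.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

Vec3f toCameraRelative(const Vec3d& world, const Vec3d& camera) {
    // The subtraction happens in doubles; only the (small) result is
    // truncated to float for the renderer.
    return { float(world.x - camera.x),
             float(world.y - camera.y),
             float(world.z - camera.z) };
}
```

An object a trillion units from the origin but five units from the camera comes out as exactly 5.0f, where a float-only pipeline would have lost it in rounding long before.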

I'm sorry if I was not clear enough on some of the things I said here; it's just that there are so many things inside my head right now, I can't really synthesize this into a better question.

Now, what do you people think of this? Is it doable? Has it been done before? What would be the challenges, the good and the bad?

Thank you for taking the time to read this.

---

My own experience may not be very relevant here, and I don't consider myself a top-level programmer, but I'll chime in anyway. Maybe it will give you some ideas - or at least eliminate some.

My own experience with this is limited to making a "to scale" version of our solar system. Even there, floating point problems arise.

I basically had 2 scales: a 'solar' scale and a 'world' scale. The player would move around the 'world' at a very slow rate (like .0001 per second). The 'world' was an area at a scale that wouldn't produce floating-point position errors. Each object in the world would then have a 'solar position/scale', which was its real-world size (which also had to be scaled to a fraction of its real value due to the massive numbers involved).

When the player moved, that info would be relayed to objects to change their scale/position to make them appear to move.

Since I was only doing the solar system, I didn't really have to mess with LOD/luminosity, although at a small enough scale most objects would be too small to even render a single pixel. If I had continued the prototype, I would have swapped them out for 2d sprites at some point.

I should note that overall, though, this wasn't a very good system. I'm using Unity, but I imagine similar problems would arise in any engine. Since I was using non-standard-scale movement, physics and collisions were useless, since they don't scale well. Physics don't work well when you're applying super-tiny forces (like .00001 velocity), and collision and rendering also don't work right on ultra-large-scale objects either.

Another problem I ran into was my player's speed. When you're in between planets/objects, the ultra-low speed felt horribly slow, but when you were close to objects, the ultra-low speed was insanely fast. There was no way to make the speed small enough to be even vaguely realistic, since it would run into floating point errors with the huge number of zeros.

Basically, I started out overly concerned about errors at the too-large scale, but ran into them at the too-small scale :-\  I briefly flirted with the idea of a double-scaling system, but I could only summon the vaguest idea of how to go about that.

Ultimately, though, I dropped the idea, because the solar system is fairly boring and 99.9% empty, and the technical problems I was facing wouldn't have been worth fixing for a 99.9% empty game world.

Edited by SirWeeble

---

I've had some of the same aspirations. The problem is scale. I came up with a different vector system than the D3D vector: I use an __int64 paired with a float. I have posted examples somewhere on this forum... or was it another forum? Anyway, the idea is to use the __int64 as the top level of the vector and the float as the lower precision (0.0f-1000.0f). This will allow you to have 2^64 km worth of distance. You would have to write your own operators, of course, but that's not a problem. Once everything (from galaxy size down to planet size and even further) is using this coordinate system, you can make a vector relative to the camera and render using D3D as normal. You WILL have to use scaling methods to show everything correctly.
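A minimal sketch of this __int64-plus-float pairing might look like the following. The field names and the normalization rule are my own assumptions; the post only specifies the 0.0f-1000.0f float range and the need for custom operators.

```cpp
#include <cmath>
#include <cstdint>

// One axis of a big-world coordinate: a 64-bit integer "cell" index
// plus a float offset within the cell, kept in [0, 1000).
struct BigCoord {
    int64_t cell;   // which 1000-unit cell
    float   offset; // position within the cell, [0, 1000)

    // Carry any overflow (or negative underflow) from the float part
    // into the integer part; floor handles negative offsets correctly.
    void normalize() {
        int64_t carry = int64_t(std::floor(offset / 1000.0f));
        cell   += carry;
        offset -= float(carry) * 1000.0f;
    }

    BigCoord operator+(const BigCoord& o) const {
        BigCoord r{ cell + o.cell, offset + o.offset };
        r.normalize();
        return r;
    }
};

// Camera-relative offset, safe to hand to the renderer as a plain
// float as long as the two positions are reasonably close together.
float relativeTo(const BigCoord& p, const BigCoord& cam) {
    return float(p.cell - cam.cell) * 1000.0f + (p.offset - cam.offset);
}
```

Three of these per position give the full 3-D vector; subtraction, scaling, and comparison operators would follow the same normalize-after-arithmetic pattern.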

This is a link to one of my posts:

http://www.gamedev.net/topic/649931-progress-on-my-current-game-no-name-yet/

I'm at work right now, so I'm not sure it's what I think it is.....

In general, I have a small number of stars around Sol (30,000) that can be explored. The only "transition" that I will have is traveling between stars. This is not because the coordinate system can't handle the size; it simply gives me time to generate the textures (procedurally) for the solar system the player is traveling to. Since that post I have been converting everything to DX11.

Your "universe model" sounds like a good way to go. There is no limit to how large your coordinate system can be. You could make an __int64, __int64, float system that would give you a 2^128 km scale; that's freak'n HUGE! That would give you a distance of 3.6x10^20 copies of our galaxy side-by-side in each direction. If you went further still, you can't even imagine the scale.

Using your method, each "intergalactic space" could be the top __int64, and everything within the "interstellar space" would be the rest... Man, that's not a bad idea you have.

---

>> I was thinking that if we can use Octrees for collision detection, frustum culling and raytracing, why can't we use this beautiful data structure to solve the universe rendering problem.

odds are an octree is overkill for both collisions and culling.

not subdividing space would probably be simpler.

what you want to do is similar to what SIMSpace v8.0 already does.  the only real difference being SIMSpace uses a generic skybox and only renders nearby bodies. in your case, you'd just clear the screen to black (or whatever) then draw as many stars as you could see, and had time to draw. same idea as drawing stars in SIMSpace, but you draw stars beyond a certain range as just a pixel, with color determined by star color and brightness determined by range.

SIMspace uses a world coordinate system of quadrants, sectors, and meters. quadrants and sectors are stored in ints, and meters is stored in a float. 1 sector is 1 million meters across, and a quadrant is 1 million sectors across. so the coordinate system can go up to about +/-4 trillion quadrants, or -4E24 through 4E24 meters. that's about 845 million light years across.
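That quadrant/sector/meter layout could be sketched as follows. The struct and function names are my own; the conversion constants follow the sizes given above (1 sector = 1e6 meters, 1 quadrant = 1e6 sectors). The point of the scheme is that relative positions are computed on the integer parts first, so precision is only spent on the (small) difference.

```cpp
#include <cstdint>

// One axis of a quadrant/sector/meter coordinate.
struct QSM {
    int64_t quadrant;
    int64_t sector;
    float   meters; // [0, 1e6)
};

const double METERS_PER_SECTOR    = 1.0e6;
const double SECTORS_PER_QUADRANT = 1.0e6;

// Signed distance in meters from b to a along one axis. The integer
// differences are taken exactly before any floating-point conversion,
// so nearby objects get full precision even at huge absolute coords.
double relativeMeters(const QSM& a, const QSM& b) {
    double dq = double(a.quadrant - b.quadrant);
    double ds = double(a.sector   - b.sector);
    double dm = double(a.meters)  - double(b.meters);
    return (dq * SECTORS_PER_QUADRANT + ds) * METERS_PER_SECTOR + dm;
}
```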

the game itself uses a space large enough to hold 1000 stars at the same average density as the milky way galaxy. which works out to a cube 14E17 meters across.

for drawing, simply divide range and scale by a constant, and lerp scale from the divided scale to zero as range goes from collision range to max visual range. max visual range is defined as the range at which you no longer bother drawing.  i'm using 100 times diameter as visual range for an object. in your case, visual range is where you stop drawing a sphere mesh and start drawing a pixel. so instead of drawing a sphere, you draw just one pixel at one color and one depth. and just like that, boom, you have realtime rendering of your stars database.

for the initial testing of the system, i used a star the size of the sun, and divided its size and scale by 100 million. the result is a unit sphere of about scale 7 drawn at ranges from 7 to 1400 d3d units, all of which is quite doable with just floats.
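The range-based scale lerp described above might be sketched like this (function and parameter names are mine; the post only gives the rule: full divided scale at collision range, shrinking linearly to zero at max visual range):

```cpp
// Drawn scale for a body: clamped linear falloff between collision
// range (full scale) and max visual range (zero scale). Beyond max
// visual range the body would be drawn as a single pixel, or not at all.
float drawScale(float range, float collisionRange, float maxVisualRange,
                float dividedScale) {
    if (range <= collisionRange) return dividedScale;
    if (range >= maxVisualRange) return 0.0f;
    float t = (range - collisionRange) / (maxVisualRange - collisionRange);
    return dividedScale * (1.0f - t);
}
```

With the sun-sized test star above (scale 7, ranges 7 to 1400), the sphere smoothly shrinks as it recedes instead of popping out at the far plane.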

---

>> odds are an octree is overkill for both collisions and culling.

Strongly disagree; on what experience do you base that assumption?
I have always had big speedups using octrees for both culling and collisions in practical cases.

Given the example where we want to render the skybox: even at speeds higher than lightspeed it would be enough to update just a few pixels (or blocks of pixels) per frame.
An octree is good for finding (and sorting) the necessary stuff quickly.

There are not just stars; there's also a lot of gas to render, and raymarching octrees works well for that too.

For collision detection it's nice too, and for simulation (if desired), using octrees to combine and approximate far sources of gravity is also common practice.

---

>> odds are an octree is overkill for both collisions and culling.

Based on my personal experience, octrees are not just efficient but also practical. Their structure makes space division natural and intuitive, but these are just bonuses; performance is greatly increased when they are properly used.

Now it's all a matter of perspective, I believe. I do agree that there may be many other techniques just as efficient as octrees, but as a programmer I've always preferred intuitive algorithms and techniques, because most of the time these are also fast.

Thank you!

---
I don't have personal experience with simulating a universe sized game environment but I'd start by representing the universe as a tree of elements of descending size.

If you look out at the real universe in great detail you'll see firstly a mass of galaxy clusters millions of light years across. So, first split your universe into a set of galaxy super clusters.

Only when the player enters a galaxy super cluster do you expand the next level of elements in the tree.

The next level is clusters of galaxies, which you expand and show to the player. Beyond that, galaxies, then star clusters, systems, planets, and surfaces. Only what's visited is expanded.

This would probably scale well. The player isn't ever going to explore the whole universe, not even a small fraction. The sense of scale is preserved as a huge illusion.

You can store your coordinates as a set of floating point vectors, e.g. clustervec:galaxyvec:systemvec:planetvec... Of course, each vector's scale is relative to the size of its object. You wouldn't travel across a super cluster at 100 mph; more like a thousand light years per second, so measuring it in anything less than megaparsecs would be silly. If you needed to drop out of hyperspace to fight, you'd drop out in the nearest galaxy, system, etc., and the coordinates could be localised for slower speeds.
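Those nested per-level vectors could be sketched as below. The level scales are illustrative round numbers I picked, not astronomical constants; the post only specifies that each level's vector is measured in a unit appropriate to its size. Collapsing to one absolute value would mostly be for debugging, since gameplay math stays within a single level.

```cpp
// Hierarchical coordinates: one vector per level of the tree, each
// relative to its parent and measured in its own unit.
const int LEVELS = 4; // cluster, galaxy, system, planet

// Meters per unit at each level (made-up round numbers for illustration).
const double SCALE[LEVELS] = { 1e22, 1e19, 1e11, 1e3 };

struct LevelVec { double x, y, z; };

// Absolute x-position in meters (y and z work identically): the sum of
// each level's offset converted to meters by its scale factor.
double absoluteX(const LevelVec v[LEVELS]) {
    double m = 0.0;
    for (int i = 0; i < LEVELS; ++i) m += v[i].x * SCALE[i];
    return m;
}
```

Movement speed naturally localises the same way: a hyperspace ship updates the cluster- or galaxy-level vector, while a dogfight only ever touches the planet-level one.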

Hope this helps!

---

That is an awful lot of space.

Your scales are a good start but will need tuning based on actual sizes. If you are operating on a meter scale for the 'planet scale', an Earth-sized planet is about 6.4 million meters in radius, which blows your scale.

You mention building new galaxies procedurally and drawing a quick rendering of the brightest stars. How many stars do you have in a galaxy? 100 billion? 500 billion? Or something game-like? Then you repeat for each galaxy; we've got over 100 billion galaxies in our observable universe.

Obviously, rendering something lifelike on the order of 10^22 stars for your background skybox is out of the question.

On the matter of octrees, when you are talking about such immense areas of sparsely-populated space, you will be better off starting with hashes or similar for the enormous areas, probably hashing them again multiple times until you get down to clusters of interesting objects, then using an octree once you've pruned things down small enough to no longer be sparse.
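A sparse spatial hash along those lines might look like this generic sketch (the names, mixing primes, and cell size are my choices). Empty space costs nothing, because only populated cells exist in the map; within a busy cell you could then build an octree.

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Key identifying one coarse grid cell.
struct CellKey {
    int64_t x, y, z;
    bool operator==(const CellKey& o) const {
        return x == o.x && y == o.y && z == o.z;
    }
};

// Classic 3-prime spatial hash for the cell key.
struct CellKeyHash {
    size_t operator()(const CellKey& k) const {
        return size_t(k.x * 73856093LL ^ k.y * 19349663LL ^
                      k.z * 83492791LL);
    }
};

// Maps populated cells to the object ids they contain; empty cells
// simply have no entry, so storage scales with content, not volume.
using SparseGrid = std::unordered_map<CellKey, std::vector<int>, CellKeyHash>;

// Bucket an object id at a world position, given the cell size.
void insertObject(SparseGrid& g, double wx, double wy, double wz,
                  double cellSize, int id) {
    CellKey k{ int64_t(std::floor(wx / cellSize)),
               int64_t(std::floor(wy / cellSize)),
               int64_t(std::floor(wz / cellSize)) };
    g[k].push_back(id);
}
```

Re-hashing with a smaller `cellSize` inside a populated cell gives the multi-level pruning described above.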

But still, having an illusion of infinity means both high repetition and sparse specific content. Few designs handle such things well.  An Earth-sized world is difficult enough to fill with interesting stuff.

---
For the scale issue, it might be useful to use custom >64-bit floating point arithmetic.
E.g. the physics lib I'm using (Newton) has this ('dgGoogol'), which defaults to a 32-bit exponent and a 256-bit mantissa.
I assume this would work, but I don't know what I'm talking about.

I don't share the worries about 'too much data'.
A procedural approach with minimal data per node (ID, mass, velocity, radiation) could generate a solar system or a galaxy on demand,
while being just a particle at a distance.

I'd mostly be worried about time. To make this interesting (imagine observing a galaxy collapsing into black holes), one would need an equation of motion for the whole universe
that can calculate its entire state at any point in time :P

---

>> Strongly disagree, on what experience do you base that assumption?

that you don't model such a large number of stars that you have to partition space to check collisions.

have you tried it yet? at what point (how many stars, with how many planets per star on average, and how many moons per planet on average) is an oct-tree required? would it apply to my case of 1000 stars with 10 planets and 10 moons per? or can i go to 100,000 stars? or 100,000,000? when do i hit oct-tree requirements? if you haven't tried it, it would seem to me to be premature optimization.

---
