# Dealing with huge open spaces (universes)?


## Recommended Posts

Hello gamedevs, I'm currently working on my first rendering engine and got stuck on how to efficiently solve the problem in the thread title. Specifically, I'm wondering how to handle positioning, and rendering at those positions, in universes that span hundreds of millions of kilometers while still keeping good, and preferably fixed, precision at any point (I'm aiming for between micrometer and millimeter precision here).

I can't see how this can be done with a standard 32-bit float data type, so the most straightforward thing would be to switch from floats to doubles and use the double's higher precision to alleviate the problem. However, this does not solve the problem completely: the double also has a limited range and, more importantly, there is still the issue of variable precision, which will again cause problems at the "edges" of the universe; those edges are now just further away than with a float.

Some suggest switching to the integer domain (64-bit or even 128-bit) to handle these problems, as in the article "A matter of precision" at "http://home.comcast.net/~tom_forsyth/blog.wiki.html", which sounds reasonable to me. I can accept the idea of handling the position of a player/camera in large integers and doing the movement deltas in floating point, but what confuses me is that ultimately a scene at some point in the universe has to be sent to the graphics hardware as a stream of vertices in floating-point format, and converting large 64-bit integer values even to doubles (space-to-space transformations) will introduce serious rounding errors. I do not want to switch to doubles if there is any possibility of getting away with integers + floats.

Bottom line: I don't understand how to handle this problem, and have possibly misunderstood the proposed integer solution, so if any of you good souls has any advice to point me in the right direction, I'm all ears :) Thanks.
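To illustrate the rounding I'm worried about, here is a quick check (the micrometer unit for world coordinates is just an assumption for the example). A double holds integers exactly only up to 2^53 (about 9.0e15), and a universe coordinate in micrometers easily exceeds that:

```cpp
#include <cassert>
#include <cstdint>

// Converting a large 64-bit world coordinate to double and back shows
// how many of the low-order units survive the round trip.
int64_t roundTrip(int64_t v) {
    return static_cast<int64_t>(static_cast<double>(v));
}
```

For example, 10^17 micrometers (100 million km) plus 3 micrometers rounds back to exactly 10^17, so the last few micrometers are simply lost in the conversion.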

##### Share on other sites
Split up the whole place into grid regions? Each would have a local position. I believe Dungeon Siege did something similar. I'd imagine you are limiting sight distance? Then again, you'd run into the problem of grid regions with nothing in them. Not sure how memory-friendly it would be.

##### Share on other sites
What exactly are you trying to pinpoint? That accuracy might be possible (though I seriously doubt any modern computer could possibly store all that data), but it would be utterly pointless. I did some research a while back for a science fiction story. If you're trying to draw even just every star in the universe, then you're drawing a good 100,000,000,000,000,000,000,000 stars (I can't remember the exact figures I used for this estimate, but search for the number of stars in our galaxy and multiply it [ours is small] by some small factor [1.x], then by the number of all known galaxies, then round down). You can't possibly store all that information, let alone to the degree of accuracy you're seeking.

If you're on the galactic scale, a few light years won't make a difference, let alone micrometers. And even at that scale there is still an extreme number of objects.

You should note that there is no way to know the exact position of anything at any great distance in space (or of anything at all, if you want to get overly technical), so no matter how accurate your scale is, it won't be "real." Also, one would not be able to tell if something were a number of kilometers off target at any great distance, let alone meters, so that scale is again unnecessary.

Honestly, you need be no more accurate than astronomical units (AU), and even then you will be facing horribly complex methods of compressing locations in space.

##### Share on other sites
From a logical standpoint, and from that of making a game, trying to represent an entire universe all at once is completely unreasonable and unnecessary, but there are ways of doing it. The best approach would be procedural, which would allow you to generate static data on the fly and at a variety of scales. You could subdivide dynamic objects into variable-sized partitions. But, seriously, this is a waste of your time.

There is only one really viable alternative, and that's to individually design each significant region and ignore the space in between. Freelancer presents a very good example of this: each star system is detailed down to the meteor level, while jump gates speed travel between systems, thereby trimming all the boring, unnecessary parts. You still have to travel between planets and intra-system phenomena (like asteroid fields).

Consider that, as a practical matter, you would not be able to see anything more than a few hundred kilometers away at best, and it's likely you would see far less than that. Only planets, stars, and other massive cosmic features are visible from millions of kilometers; ships would be invisible at that distance, even with scanning devices (unless they are preternaturally accurate). A localized approach is probably your best bet from any standpoint. Seriously, who in his right mind would want to travel through all that empty space? Even Freelancer, with its hand-crafted star systems, jump lanes, and greatly exaggerated scale, bored me to death in its vastness.

##### Share on other sites
Just a principle that should work: following the answers above, only a (more or less) small part of the universe is rendered at a time. Then objects in the universe need not be given entirely at the maximum numeric resolution. Their positions may need to be defined using those big integers, but their vertices don't. Now, after subtracting the (big-integer) position of the camera from the (big-integer) position of an object, and determining that the object is inside the (said more or less) near range of visibility, the subtraction has left you in a local co-ordinate system (i.e. the camera's CS) where e.g. float precision is sufficient. Since rendering is done in this local CS only, vertices at float resolution are enough. Things too far away may just be rendered using billboards or similar techniques.
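A minimal sketch of that subtraction idea. The micrometer unit and the struct names are just assumptions for illustration; the point is that the subtraction happens in the integer domain, and only the small difference is converted to floating point:

```cpp
#include <cassert>
#include <cstdint>

// Absolute world positions as 64-bit integers (micrometers, an assumed unit).
struct WorldPos { int64_t x, y, z; };
// Camera-relative position in meters; float is fine for nearby geometry.
struct LocalPos { float x, y, z; };

// Subtract first, then convert: the difference is small for anything near
// the camera, so no meaningful precision is lost in the float conversion.
LocalPos toCameraSpace(const WorldPos& obj, const WorldPos& cam) {
    const double unitsPerMeter = 1e6;  // micrometers per meter
    return { static_cast<float>((obj.x - cam.x) / unitsPerMeter),
             static_cast<float>((obj.y - cam.y) / unitsPerMeter),
             static_cast<float>((obj.z - cam.z) / unitsPerMeter) };
}
```

Even with the camera hundreds of millions of kilometers from the origin, an object 1.5 m away comes out at exactly 1.5 in camera space.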

(Since I haven't tried to implement such a thing yet, there may be pitfalls I've not seen from this theoretical point-of-view, of course.)

##### Share on other sites
You should never have to specify geometry at greater precision than that provided by floats (in video games, at least). Just make everything in a region keep track of its position relative to the origin of that region, which would be specified by a double or a long double or a big integer or whatever you want to use. The camera position is also specified relative to the region's origin.

Make the regions overlap a bit, and when an object passes completely into a different region, it becomes associated with that region and its position is recalculated. If parts of a region other than the one the camera is in need to be drawn, just compute the camera's position relative to that region.

Open space can be filled with a regular grid of regions, which only exist when they are needed. But space is a big place...
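A rough one-axis sketch of the overlapping-region handoff described above (the region size and overlap width are arbitrary choices for the example):

```cpp
#include <cassert>
#include <cstdint>

// One axis of a regular region grid: an object stores its region index
// plus a float offset from that region's origin. Sizes are assumptions.
const float REGION_SIZE = 8000.0f;  // meters per region
const float OVERLAP     = 500.0f;   // hysteresis band between regions

struct Pos { int64_t region; float local; };

// Re-associate the object with a neighbouring region only once it has
// moved past the overlap band, then rebase its local offset.
void rebase(Pos& p) {
    if (p.local > REGION_SIZE + OVERLAP) {
        p.region += 1;
        p.local  -= REGION_SIZE;
    } else if (p.local < -OVERLAP) {
        p.region -= 1;
        p.local  += REGION_SIZE;
    }
}
```

The overlap band is what keeps an object sitting exactly on a boundary from flip-flopping between regions every frame.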

##### Share on other sites
I agree with Vorpy. To what end does so much precision become useful? If you're drawing a planet a million miles away, it isn't going to look any different if it is (imprecisely) a million and one miles away.

##### Share on other sites
Quote:
 Original post by Zipster: Check out The Continuous World of Dungeon Siege for some ideas.

Great, thanks. This article was spot on. The concept presented there, "there is no (unified) world space", is just about what I was thinking of as "getting away with integers and floats".
While this does allow "endless" spaces, with floating-point precision issues constrained to a single segment, it basically works well only for that type of game. By this I mean:
- an environment that is not highly dynamic
- no need for high-velocity movement and fly-overs
- problems with "teleporting"
The second thing I noticed is that they use a fixed "up" direction, which means the system is not applicable, or only with many difficulties, to planets (spherical worlds).
In general it is a really nice concept, but this implementation is tailored to a different game genre than the one I'm aiming for.
Quote:
 by Zouflain: If you're on the galactic scale, a few light years won't make a difference, let alone micrometers. Even then there are an extreme amount.

When looking at something from such a great distance, such fine precision really does not matter; you're correct. But if I were to "fly" over to that distant place, I would want the same (or very similar) precision there as at the place I started from.

From all your posts so far I can draw the conclusion that really the only reasonable way to handle this issue is to use segmented space. haegarr summed this up nicely. I could use integers to handle absolute universe coordinates and use floats within a single segment. Now, how big can this segment be? I'm thinking that if I accept 1 mm precision at the outer edges, this segment (box) could be 8 km in each direction. With 1 cm it could be 80 km, but that is as far as I'm willing to accept. Discussion on units of measurement covered here.
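Those 8 km / 80 km figures can be sanity-checked by measuring the spacing between adjacent floats (the "ulp") at a given magnitude:

```cpp
#include <cassert>
#include <cmath>

// Distance from x to the next representable float above it, i.e. the
// best precision a float coordinate can offer at that magnitude.
float ulpAt(float x) {
    return std::nextafterf(x, INFINITY) - x;
}
```

At 8192 m the spacing is 2^-10 m (about 0.98 mm), so an 8 km half-extent keeps roughly millimeter precision; at 80,000 m it is 2^-7 m (about 7.8 mm), just under a centimeter.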
Quote:
 by Vorpy: Open space can be filled with a regular grid of regions, which only exist when they are needed. But space is a big place...

Indeed, it is big, which leads to the problem Sirisian stated:
Quote:
 by Sirisian: Not sure how memory friendly it would be.
if such small segments are used. That remains to be seen...
Quote:
 by Tom: From a logical standpoint, and from that of making a game, trying to represent all at once an entire universe is completely unreasonable and unnecessary, but there are ways of doing it.

Exactly, and my goal was to find out what the possible solutions are, both sane and insane ones. I am writing my final exam on the topic of 3D real-time renderers with support for open terrains/spaces, and as one sub-topic I need to identify the theoretical and practical limitations if such a renderer were taken to the extreme, meaning an "entire" universe, and also what the possible and eventually feasible solutions for it would be. I am also supposed to roughly calculate the approximate memory/processing requirements for each solution and define its pros and cons.

For the practical part of this work I am to implement a relatively simple prototype, a showcase for such a renderer, which is required to cope with a plane of procedurally generated terrain (the ultimate goal is 1000x1000 km). A solution for planets needs to be documented only, not implemented.
As a side question, does anyone have an idea how the ROAM algorithm would cope with a segmented-space solution? Since it works on the principle of triangle subdivision, and two triangles make a quad, which is after all the base of each segment, it should work within a single segment (let's say a terrain patch for my prototype), but would it be able to cope with multiple segments (patches)? I guess some seam stitching will be required :-/

Thanks for all your replies so far.

##### Share on other sites
Quote:
 Original post by Tom: Even Freelancer, with its hand-crafted star systems, jump lanes, and greatly exaggerated scale, bored me to death in its vastness.

Maybe slightly off-topic, but how is that possible?!? IMO the Freelancer "universe" was far too small, rather than too vast.

Slightly more on-topic: it actually is possible to create a whole universe at quite a detailed scale, as long as it's done procedurally. For example, look at this game: http://www.fl-tw.com/Infinity/
