Norman Barrows

max size for level using floats


what's a good max size for a level, given the limitations of floats?


we get 7 digits to work with, so i'm thinking 1 million d3d units max. or is that too big? i.e. will it cause problems?


i'm starting to work on a new version of my starship flight sim, and i'm wondering how large a d3d coordinate system i can use for drawing the scene.


i'm thinking of defining the world as follows:

scale: 1 d3d unit = 1 meter.

1 sector is 1 million meters across

1 quadrant is 1 million sectors across

the game world is a cube 200,000 quadrants across, containing 1000 star systems.


a location in a sector would be specified by d3d coordinates (0 to 1 million) stored in floats.


will this sector (level) size of 1 million d3d coordinates across be too big?


in my airship sim, i use 10,000 d3d coordinates as the max size of a rendering "chunk" or "cell" or "level".


in my caveman sim, i use 26,400 d3d coordinates as the max size of a rendering "chunk" or "cell" or "level".







if you use 64-bit floating point numbers (i.e. doubles) that range will likely be fine, but using 32-bit floats is going to cause precision problems at that scale.


dx9 only supports floats, not doubles.


i suspected it might be pushing it. special code would be needed to calculate true 2d and 3d distances with full precision.


but assume that special code is written so the physics doesn't blow up. 


also assume that the near and far planes will be set multiple times per frame to keep zbuf resolution high.


so is 100,000 still too big?


what's the max level size in popular game engines? 50K?


You won't be able to specify the position with much precision at the edges of your star system. The limiting factor here is the fraction part of the floating point number. Your universe is about 2^58 meters across. A double has 53 bits of precision in the fractional part. This means, at the edges of the universe, you will only be able to represent the position of a ship in increments of 2^5, or about 32 meters. Floats are even worse: you only get 24 bits of precision, which at that scale is a granularity of 2^34 meters, or about 16 billion meters. Not good.


I would use 64-bit integers to store the galactic coordinates of objects in your universe. That would give you a resolution of about 2^-6 meters, or roughly two centimeters. Of course, when rendering your scene, you won't be able to work in integers. So whenever you are rendering anything, you need to pick a local origin, then convert all active object coordinates to floats that are relative to that local origin.
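A minimal sketch of that conversion, assuming a made-up fixed-point scale of 64 world units per meter (2^-6 m resolution); the struct and function names are hypothetical:

```cpp
#include <cstdint>

struct WorldPos {          // fixed-point world coordinate, 64 units = 1 m
    int64_t x, y, z;
};

struct LocalPos {          // float meters, relative to the camera
    float x, y, z;
};

// Convert a world position to camera-relative float meters. Values near
// the viewer stay small, so float precision stays high where it matters.
LocalPos to_local(const WorldPos& p, const WorldPos& camera) {
    const float units_per_meter = 64.0f;
    LocalPos out;
    out.x = (float)(p.x - camera.x) / units_per_meter;
    out.y = (float)(p.y - camera.y) / units_per_meter;
    out.z = (float)(p.z - camera.z) / units_per_meter;
    return out;
}
```

The int64 subtraction happens at full precision; only the small camera-relative result is narrowed to float.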


EDIT: fixed computation error

Edited by HappyCoder


When you're rendering, just stick near the origin, and move the universe to you.


With regards to map data, load quadrants of reasonable size as you approach them, and just keep the surrounding quadrants loaded at any given time.  If you need to see distant objects, render them into your sky box for each quadrant, and interpolate between skyboxes as you move over great distances.


The best option is to simply try it.


i was hoping to come across a rule of thumb, such as +/- 500,000 units gives you precision of 0.1 units.


a precision of 0.1 units (10cm) will suffice for my purposes.


so with ~7 significant digits in the float's mantissa, plus a sign bit, i could go +/- 1,000,000 (999,999) units with a precision of 0.1 units, correct?


i was thinking it would be easier if the highest resolution coordinates for an object (x,y,z location in a sector / chunk / level) were floats, so they would translate directly to d3d units, rather than using something like int64s for world coordinates.


BTW, using an int64 for quadrillions of meters, and a second int64 for decimeters, you get a coordinate system with 10cm resolution, 2.5 times as wide as the galaxy (and that's just in the first octant).


are int64's supported on 32 bit apps? they are, right?


well, regardless of how the world coordinates are stored, eventually they get converted into a float based d3d coordinate system, which still needs to have its upper/lower bounds defined. i.e. anything beyond say 500,000 units from the camera will require special handling.


FYI, i've calculated the far clip range for a large star 2000 million (2e9) d3d units in diameter as roughly 200K million (2e11) d3d units. this is using a scale of 1 d3d unit = 1 meter.


so, anyone out there have suggestions on what those bounds should be?


slicer says 1,000,000 is too big... slicer: have you tried that?


Depends on your units.


1 d3d unit = 1 meter. x,y,z float values range from -500,000 to 500,000. that gives me a precision of 1 decimal place = 0.1 units = 10 cm, correct?


one decimeter precision will suffice for my needs.



TL;DR - you can always scale distant objects down and move them closer, to get them within the render volume.


already got that part figured out. just need to define how big the d3d level/area/chunk/sector is, in order to determine what is "distant".  for distant objects, divide the range by the level size to get the divisor. then divide the range AND scale by the divisor. this maps and scales down objects that are beyond the level, into the level.
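The divide-range-and-scale trick described above can be sketched as follows (names are made up): shrinking an object by the same factor you pull it in preserves its apparent size.

```cpp
struct Drawable {
    float range;   // distance from camera, in d3d units
    float scale;   // model scale factor
};

// Map an object beyond the level boundary onto the boundary, shrunk
// by the same factor so its apparent onscreen size is unchanged.
void pull_into_level(Drawable& d, float max_level_range) {
    if (d.range > max_level_range) {
        float divisor = d.range / max_level_range;
        d.range /= divisor;    // now exactly max_level_range
        d.scale /= divisor;    // apparent size is preserved
    }
}
```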


also need to define a rule of thumb for far clip distance. i was thinking of using 100 times the object size. IE a person is 2 meters tall, so you draw them starting at a range of 200m, at which point they will have an apparent height onscreen of about 1cm.


the idea there is that an object 50 units across will have an apparent onscreen size of ~50 units at a range of 1 unit, and an apparent onscreen size of ~1 unit at a range of 50 units. with a scale of 1 unit= 1 meter, if we decide that we want to start drawing when an object is 1cm (0.01 units) in apparent screen size, then we need to zoom out to 100x the size of the object, or 5000 units for an object 50 units in size. does this sound correct?
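Treating apparent size as size/range (the exact screen fraction depends on FOV, but the proportionality holds), the rule of thumb above reduces to one line; the function name and 0.01 threshold are the ones assumed in the post:

```cpp
// Start drawing an object once it would cover at least min_apparent
// units of apparent size at unit distance (apparent size ~= size/range).
float clip_range(float object_size, float min_apparent = 0.01f) {
    return object_size / min_apparent;   // 0.01 -> draw within 100x size
}
```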


so really at this point, i need to figure out the level size, and the clip range formula. in the article mentioned above, they said a float only has 6 digits of precision, not seven. that would mean a level size of +/- 99,999 or so with 1 decimal precision.


i'm trying to remember what sizes i've used in the past for different things, and whats the largest i've used...  as i recall, distant galaxy billboards were drawn inside the skybox of Gamma Wing at a range of 100,000 units (maybe?).


i suppose i could play it safe and just use something like +/- 50,000 units. (100K units across, instead of 1M units across) and start with cliprng=100*object_size, and see how it goes...


so a sector would be 100Km across, and a quadrant would be 1B sectors across (1E14 m)  and the world map would be a cube 2000 quadrants across (2E17 m, or 2000 quadrillion meters).


location in a sector would fit in a float, with 1 or 2 decimal places of accuracy.  sector and quadrant indices could fit in ints.  might be able to lose quadrants, and store sectors in int64s. again the question of int64 support in 32 bit apps.

