
Unigine 2 and its 64-bit coordinate precision


Hi everybody,

 

According to this, Unigine provides:

64-bit coordinates precision: real-world scale of virtual scenes as large as the solar system

 

Does anyone have more information about what they use internally? Mainly, I would like to know whether they do everything in 64 bits (meaning that vertex positions are sent to the GPU in 64-bit precision), or whether they use some tricks (i.e. 64-bit precision at editing time, then some splitting trick, so that at runtime the GPU operates on 32-bit positions)?

Reading further on their website, it seems you can see much farther than in usual engines, and render both near and far objects at the same time (see this page), which leads me to believe they might be using 64-bit floats directly, without any tricks to convert them to 32 bits for the GPU...

 

If anyone has documents about how they manage this internally, please point me to them.

 

Thanks.


It's almost certainly some sort of virtual 64-bit precision, two 32-bit depth buffers or something along those lines. A single 32-bit buffer wouldn't give any extra benefit; it's already common to flip it for more precision over a range. And native 64-bit is sloooooow even on most "professional" cards, let alone average consumer ones. Star Citizen does virtual 64-bit too; I never learned the details, but I'd bet it's pretty much the same.

Edited by Frenetic Pony

You don't need double precision for vertices on the GPU. Model-space vertex positions are usually close to the origin, and final NDC coordinates are also close to the origin. If you use a model->world matrix followed by a world->projection matrix, you'll end up in trouble because world-space coordinates are massive... So you just have to be sure to use a single model->world->projection matrix, calculated on the CPU in double precision and then truncated to single precision before being sent to the GPU.
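A minimal sketch of that idea (my own, assuming GLM and a plain GL uniform upload; the uniform name u_ModelViewProjection and the helper are purely illustrative, not anything Unigine-specific):

#include <glad/glad.h>                    // any OpenGL loader header works here (assumption)
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// All matrices are built in double precision on the CPU, so the huge
// world-space translations cancel out before anything is truncated to float.
void UploadMVP(GLuint program,
               const glm::dmat4& modelToWorld,   // may contain enormous translations
               const glm::dmat4& worldToView,    // camera sitting at a huge world position
               const glm::dmat4& viewToClip)     // projection
{
    // Concatenate in doubles: model space and clip space are both near the
    // origin, even though world space is not.
    glm::dmat4 mvp64 = viewToClip * worldToView * modelToWorld;

    // Truncate to single precision only at the very end, then send to the GPU.
    glm::mat4 mvp32(mvp64);

    GLint loc = glGetUniformLocation(program, "u_ModelViewProjection");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mvp32));
}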

They also use 32-bit log depth buffers, because the usual inverted float technique doesn't work in GL.


They also use 32bit log depth buffers, because the normal inverted float technique doesn't work in GL.

It does if GL supports the GL_ARB_clip_control extension (core since OpenGL 4.5), where you can call glClipControl( whatever, GL_ZERO_TO_ONE ); to change the default clip-space depth range from [-1;1] to [0;1]
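For reference, a minimal sketch of the state setup this enables, assuming a GL 4.5 (or ARB_clip_control) context; nothing here is specific to Unigine:

#include <glad/glad.h>   // assumes a GL 4.5 capable loader/context

void SetupReversedZ()
{
    // Use a [0;1] clip-space depth range (as in D3D) instead of GL's default
    // [-1;1], so a reversed float depth buffer actually keeps its extra precision.
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

    // With reversed Z, "far" is 0 and "near" is 1, so the depth test flips too.
    glDepthFunc(GL_GREATER);
}

// Each frame: clear depth to 0.0 instead of 1.0.
// glClearDepth(0.0); glClear(GL_DEPTH_BUFFER_BIT);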


It does if GL supports GL_ARB_clip_control extension (mandatory since OGL 4.5), where you can call glClipControl( whatever, GL_ZERO_TO_ONE ); to change the default depth range from [-1;1] to [0;1]

Last time I posted about that, I thought someone had corrected me to say that precision is lost before glClipControl, but I just rechecked and they were actually talking about glDepthRange :o
So yes, you're right :D
Here's a good blog post on it.

I guess they must use the log depth technique so that they can hand-tweak the exponent per scene...
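For context, the classic logarithmic depth mapping (as popularised by Outerra/Ysaneya) looks roughly like this; the constant C is the per-scene knob being referred to. This is a generic sketch, not Unigine's actual shader code, and in practice it is evaluated in the vertex or fragment shader rather than in C++:

#include <cmath>

// Generic log-depth mapping to [0;1]. Larger C gives more precision near the
// camera, smaller C pushes it toward the far plane -- the value that gets
// hand-tweaked per scene.
float LogDepth(float wClip, float farPlane, float C)
{
    return std::log(C * wClip + 1.0f) / std::log(C * farPlane + 1.0f);
}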


Thanks a lot for the replies, they are very helpful.

So to recap, and to check whether I understood correctly:

  • 64-bit projection/modelview matrices on the CPU side, sent as a single 32-bit MVP to the GPU
  • Most probably the usual 24-bit depth buffer
  • Logarithmic depth to allow tweaking if necessary
  • And probably 64-bit position precision on the CPU side


Most probably the usual 24-bit depth buffer

24-bit integer depth buffers are unusual these days; 32-bit float is normal. Most GPUs will store a 24-bit depth-stencil as a 32bpp depth texture (with 8bpp wasted), plus a completely separate 8bpp stencil buffer -- so there's no benefit over using a float32 depth format.
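For example (a sketch, assuming GL 4.2+ for glTexStorage2D; the width/height parameters and loader header are placeholders), rendering into an offscreen framebuffer with a float depth attachment looks like this:

#include <glad/glad.h>   // any OpenGL loader header (assumption)

GLuint CreateFloatDepthTarget(GLsizei width, GLsizei height)
{
    // 32-bit float depth texture instead of the old 24-bit integer format.
    GLuint depthTex = 0;
    glGenTextures(1, &depthTex);
    glBindTexture(GL_TEXTURE_2D, depthTex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32F, width, height);

    // Attach it to an offscreen framebuffer and render into that.
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                           GL_TEXTURE_2D, depthTex, 0);
    return fbo;
}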


 


Thank you also for this. So it seems that offscreen rendering (through FBOs) is really the way to do things nowadays.


 


An inverted Z buffer is way more practical than a logarithmic depth buffer.
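As an illustration (my own sketch, not anyone's engine code), a reversed-Z perspective matrix with an infinite far plane for a [0;1] depth range, written column-major as OpenGL expects, looks like this; combined with the glClipControl/GL_GREATER setup shown earlier it gives near-uniform float depth precision:

#include <array>
#include <cmath>

// Reversed-Z, infinite-far perspective projection (right-handed, depth in [0;1]):
// depth is 1.0 at the near plane and tends toward 0.0 at infinity.
std::array<float, 16> ReversedZInfinitePerspective(float fovyRadians,
                                                   float aspect,
                                                   float zNear)
{
    const float f = 1.0f / std::tan(fovyRadians * 0.5f);
    return {
        f / aspect, 0.0f,  0.0f,  0.0f,   // column 0
        0.0f,       f,     0.0f,  0.0f,   // column 1
        0.0f,       0.0f,  0.0f, -1.0f,   // column 2
        0.0f,       0.0f,  zNear, 0.0f    // column 3
    };
}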



On the CPU side we use double-precision (64-bit) floats for coordinates. Meshes are stored with single-precision coordinates, while their pivot transforms are in doubles as well.

On the GPU side most of the stuff is in single (or even half) precision thanks to the virtual camera offset, while some calculations are done in doubles (especially when dealing with Earth curvature in geoid mode).

The depth buffer is 32-bit, with some tricks to improve precision.

Handling large worlds is tricky; you also need to take care of geo coordinates: http://unigine.com/en/products/engine/unbounded-world

Edited by binstream


 


Thanks for sharing this!


 

It does if GL supports GL_ARB_clip_control extension
Whoa, it ain't supported in any GL 3 level hardware. 

 

You sure about that? DirectX uses the equivalent of GL_ZERO_TO_ONE, so hardware support is likely present. Driver support is a different matter though.


My engine has a Position class which represents a 3D position in world space and internally uses doubles (__m128d SIMD registers), with utilities to add/subtract Positions and compute relative vectors in float coordinates. The Transform component contains it, and consequently all model instances, cameras, sound sources, AI agents, particle effects, etc. take advantage of it.

At the beginning of each frame, a new world origin is chosen, which usually coincides with the position of the main camera. All Transform components are then converted to float precision via Transform.position.Subtract(worldOrigin). Frustum culling, rendering, water simulation, AI and other simulation tasks then operate in this float-based, camera-relative world space. It's an elegant approach and it works very well. As an example, I am working on a new demo with an island located very far from the zero origin (longitude: 354750, latitude: 3703690) and everything works perfectly.
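A rough sketch of that scheme (names like Position, Subtract and worldOrigin mirror the post, but the code itself is illustrative, not the poster's actual implementation):

// Absolute positions live in doubles; only camera-relative offsets are
// truncated to floats and handed to the renderer.
struct Vec3f { float x, y, z; };

struct Position
{
    double x = 0.0, y = 0.0, z = 0.0;   // absolute world-space coordinates

    // The subtraction happens in double precision, so the result is small
    // enough to be safely converted to float.
    Vec3f Subtract(const Position& origin) const
    {
        return Vec3f{ static_cast<float>(x - origin.x),
                      static_cast<float>(y - origin.y),
                      static_cast<float>(z - origin.z) };
    }
};

// At the start of each frame, re-base every transform around the camera:
// Vec3f renderPos = transform.position.Subtract(worldOrigin);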

For the depth buffer, I am now using a reversed depth buffer and it magically fixed all the z-fighting artifacts I was having.

