Unigine 2 and its 64-bit coordinate precision

Started by
14 comments, last by Reitano 6 years, 10 months ago

Hi everybody,

According to this, Unigine provides:

64-bit coordinates precision: real-world scale of virtual scenes as large as the solar system

Does anyone have more information about what they use internally? Mainly, I would like to know whether they do everything in 64 bits (meaning that vertex positions are sent to the GPU in 64-bit precision), or whether they use tricks (i.e. 64-bit precision at editing time, then some split tricks, so that at runtime the GPU operates on 32-bit positions)?

Reading further on their website, it seems you can see much farther than in usual engines, and render near and far objects at the same time (see this page), which leads me to believe they might be using 64-bit floats without any tricks to send them to the GPU as 32 bits...

If anyone has any documents about how they manage this internally, please point me to them.

Thanks.

Looks like the C++ double type

It's certainly some sort of virtual 64-bit precision, two 32-bit depth buffers or some such thing. A single 32-bit buffer wouldn't give any benefit; it's already common to flip it for more precision over a range. And native 64-bit is sloooooow even on most "professional" cards, let alone average consumer ones. Star Citizen does virtual 64-bit; though I never learned the details, I'd bet it's pretty much the same.

You don't need double precision for vertices on the GPU. Model-space vertex positions are usually close to origin, and final NDC coordinates are also close to origin. If you use a model->world matrix followed by a world->projection matrix then you'll end up in trouble because world-space coordinates are massive... So you just have to be sure to use a single model->world->projection matrix which has been calculated by the CPU in double precision before being truncated to single precision and sent to the GPU.

They also use 32bit log depth buffers, because the normal inverted float technique doesn't work in GL.

> They also use 32bit log depth buffers, because the normal inverted float technique doesn't work in GL.

It does if GL supports the GL_ARB_clip_control extension (core since OpenGL 4.5), where you can call glClipControl( whatever, GL_ZERO_TO_ONE ); to change the default depth range from [-1;1] to [0;1]

> It does if GL supports GL_ARB_clip_control extension (mandatory since OGL 4.5), where you can call glClipControl( whatever, GL_ZERO_TO_ONE ); to change the default depth range from [-1;1] to [0;1]

Last time I posted about that, I thought someone corrected me to say that precision is lost before clipControl, but I just rechecked and they were actually talking about depthRange :o
So yes, you're right :D
Here's a good blog post on it

I guess they must use the log depth technique so that they can hand-tweak the exponent per scene...

> It does if GL supports GL_ARB_clip_control extension
Whoa, it ain't supported in any GL 3 level hardware.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Thanks a lot for the replies, they are very helpful.

So to recap, and to check that I understood correctly:

  • 64-bit projection/modelview matrices on the CPU side, sent as a single 32-bit MVP to the GPU
  • Most probably a usual 24-bit depth buffer
  • Logarithmic depth to allow tweaking if necessary
  • And probably 64-bit position precision on the CPU side

> Most probably usual 24-bits depth buffer

24-bit integer depth buffers are unusual these days; 32-bit float is normal. Most GPUs will store a 24-bit depth-stencil as a 32bpp depth texture (with 8bpp wasted), plus a completely separate 8bpp stencil buffer -- so there's no benefit over using a float32 depth format.

> Most probably usual 24-bits depth buffer
>
> 24bit integer depth buffers are unusual these days. 32bit float is normal. Most GPU's will store a 24-bit depth-stencil as a 32bpp depth texture (with 8bpp wasted), plus a completely separate 8bpp stencil buffer -- so there's no benefit over using a float32 depth format.

Thank you also for this. So it seems that offscreen rendering (through FBOs) is really the way to do things nowadays.

This topic is closed to new replies.
