_Silence_

Unigine 2 and its 64-bit coordinate precision



Hi everybody,

 

According to this, Unigine provides:

64-bit coordinates precision: real-world scale of virtual scenes as large as the solar system

 

Does anyone have more information about what they are using internally? Mainly, I would like to know whether they do everything in 64 bits (meaning that vertex positions are sent to the GPU in 64-bit precision), or whether they use tricks (i.e. 64-bit precision at editing time, then some splitting scheme, so that at runtime the GPU operates on 32-bit positions)?

Reading further on their website, it seems we can see much farther than in usual engines, and render near and far things at the same time (see this page), which leads me to believe they might be using 64-bit floats without any tricks to send them to the GPU as 32 bits...

 

If anyone has documents about how they manage this internally, please let me know.

 

Thanks.


It's certainly some sort of virtual 64-bit precision, two 32-bit depth buffers or some such thing. A single 32-bit buffer wouldn't give any benefit; it's already common to flip (reverse) it for more precision over a range. And native 64-bit math is slow even on most "professional" cards, let alone average consumer ones. Star Citizen does virtual 64-bit too; I never learned the details, but I'd bet it's pretty much the same approach.
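For reference, "virtual 64-bit" usually means camera-relative (floating-origin) rendering: positions are kept in doubles on the CPU, the double-precision camera position is subtracted, and only the resulting small offset is truncated to float for the GPU. A minimal sketch of that idea; the Vec3d/Vec3f types and the function name are just illustrative, not taken from Unigine or Star Citizen:

// CPU-side positions stay in double precision; the GPU only ever sees
// small camera-relative offsets that fit comfortably in 32-bit floats.
struct Vec3d { double x, y, z; };
struct Vec3f { float x, y, z; };

// Subtract the camera position while still in double precision,
// then truncate the now-small offset to single precision.
Vec3f toCameraRelative(const Vec3d& worldPos, const Vec3d& cameraPos)
{
    return Vec3f{ static_cast<float>(worldPos.x - cameraPos.x),
                  static_cast<float>(worldPos.y - cameraPos.y),
                  static_cast<float>(worldPos.z - cameraPos.z) };
}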

Edited by Frenetic Pony

You don't need double precision for vertices on the GPU. Model-space vertex positions are usually close to origin, and final NDC coordinates are also close to origin. If you use a model->world matrix followed by a world->projection matrix then you'll end up in trouble because world-space coordinates are massive... So you just have to be sure to use a single model->world->projection matrix which has been calculated by the CPU in double precision before being truncated to single precision and sent to the GPU.
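A minimal sketch of that, assuming GLM for the matrix types and a plain glUniformMatrix4fv upload (GLM and the uploadMvp name are my assumptions, not anything Unigine documents); a GL loader header (glad, GLEW, ...) is assumed to be included:

#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>

// Concatenate model -> world -> view -> projection entirely in double
// precision on the CPU, and only truncate the final matrix to floats.
void uploadMvp(GLint mvpLocation,
               const glm::dmat4& projection,
               const glm::dmat4& view,
               const glm::dmat4& model)
{
    glm::dmat4 mvpDouble = projection * view * model; // 64-bit math on the CPU
    glm::mat4  mvpFloat(mvpDouble);                   // truncate once, at the very end

    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, glm::value_ptr(mvpFloat));
}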

They also use 32-bit logarithmic depth buffers, because the usual reversed-Z (inverted float) technique doesn't work in GL.


They also use 32-bit logarithmic depth buffers, because the usual reversed-Z (inverted float) technique doesn't work in GL.

It does if GL supports the GL_ARB_clip_control extension (core since OpenGL 4.5), where you can call glClipControl( whatever, GL_ZERO_TO_ONE ); to change the default clip-space depth range from [-1, 1] to [0, 1].
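For reference, the standard reversed-Z setup under that extension looks something like this (just the usual recipe, not Unigine's actual code):

// Assumes an OpenGL 4.5 context (or GL_ARB_clip_control) and a
// floating-point depth attachment such as GL_DEPTH_COMPONENT32F.
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE); // clip-space depth range becomes [0, 1]
glClearDepth(0.0);                            // with reversed Z, "far" clears to 0
glDepthFunc(GL_GREATER);                      // nearer fragments now have larger depth values
// ...plus a projection matrix that maps the near plane to 1 and the far plane to 0.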


It does if GL supports the GL_ARB_clip_control extension (core since OpenGL 4.5), where you can call glClipControl( whatever, GL_ZERO_TO_ONE ); to change the default clip-space depth range from [-1, 1] to [0, 1].

Last time I posted about that, I thought someone corrected me to say that precision is lost before glClipControl, but I just rechecked and they were actually talking about glDepthRange :o
So yes, you're right :D
Here's a good blog post on it.

I guess they must use the logarithmic depth technique so that they can hand-tweak the exponent per scene...
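For reference, in the Outerra-style logarithmic depth trick (which may or may not be exactly what Unigine does), the per-scene tunable part boils down to a single coefficient derived from the far-plane distance:

#include <cmath>

// Logarithmic depth: the only per-scene knob is this coefficient,
// derived from the far-plane distance (or hand-tweaked).
// In the vertex shader, clip-space z is then typically replaced with:
//   gl_Position.z = (log2(max(1e-6, 1.0 + gl_Position.w)) * fcoef - 1.0) * gl_Position.w;
double logDepthCoefficient(double farPlane)
{
    return 2.0 / std::log2(farPlane + 1.0);
}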


It does if GL supports the GL_ARB_clip_control extension
Whoa, it ain't supported in any GL 3 level hardware. 


Thanks a lot for the replies; they are all very relevant.

So to recap, and to check if I understood correctly:

  • 64-bit projection/modelview CPU-side matrices sent as a single 32-bit MVP to the GPU
  • Most probably the usual 24-bit depth buffer
  • Logarithmic depth to allow tweaking if necessary
  • And probably 64-bit position precision on the CPU side


Most probably the usual 24-bit depth buffer

24-bit integer depth buffers are unusual these days; 32-bit float is normal. Most GPUs will store a 24-bit depth-stencil as a 32bpp depth texture (with 8bpp wasted), plus a completely separate 8bpp stencil buffer, so there's no benefit over using a float32 depth format.
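For example, creating a 32-bit float depth attachment for an offscreen FBO is straightforward; a sketch assuming a GL 4.2+ context, with width/height defined elsewhere:

// A 32-bit floating-point depth texture attached to an FBO, instead of
// the classic combined GL_DEPTH24_STENCIL8 format.
GLuint depthTex = 0, fbo = 0;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT32F, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
// Attach colour targets as usual, then check glCheckFramebufferStatus(GL_FRAMEBUFFER).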


 

24-bit integer depth buffers are unusual these days; 32-bit float is normal.
Thank you also for this. So it seems that offscreen rendering (through FBOs) is really the way to do things nowadays.
