XNA Z-order precision

Hi,

I have a game management system that handles its own culling: I use a custom frustum to cull objects before rendering, but I don't clip against a far plane, and I don't cull by distance at all. If an object is big enough to be seen at its current distance, it gets drawn. The system also handles LOD sorting and is very fast.

Anyway, just for fun, I set up an actual-size Sun and Earth and placed them their actual distance apart. I used a 1:1 scale where a value of 1.0 represents 1.0 meters, so I made the Earth object 6M units in radius, the Sun object 700M units, and set them 150B units apart.

My system handles everything fine: objects look pretty, no jitter (double precision), and such.

The problem is, the renderer is drawing the sun on top of the earth when the earth should obscure it. I'm guessing this is a z-order precision issue, but I'm not a rendering guy.

My guess is that there's nothing I can do about it, but I wanted to be sure.

Oh, I'm using XNA for rendering right now, but my system allows any rendering codebase to plug in fairly easily, so I'm asking this specifically about XNA, but not to the exclusion of other rendering engines.

Cheers,
Dave
The z-buffer will, more than likely, be a 24-bit floating point value... to put it simply: you've run out of precision, so the z-test is likely "failing" (in that it always passes) and the objects simply show up in the order they are drawn.

With a 32-bit floating point value, when you store '1,000,000,000' the next value you can store is '1,000,000,064', as that is all the bits you have to spare. Consider then, with 8 bits less and much greater values, how little precision you are going to have in the z-buffer with those numbers :)
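
To make that concrete, here's a minimal plain-C# sketch (no XNA involved) showing the spacing between adjacent 32-bit floats around one billion:

```csharp
using System;

class FloatSpacingDemo
{
    static void Main()
    {
        float big = 1000000000f;               // 1e9 stored in a 32-bit float
        Console.WriteLine(big + 1f == big);    // True: adding 1 is lost to rounding

        // Step to the next representable float by bumping the raw bit pattern.
        int bits = BitConverter.ToInt32(BitConverter.GetBytes(big), 0);
        float next = BitConverter.ToSingle(BitConverter.GetBytes(bits + 1), 0);
        Console.WriteLine(next - big);         // 64: the gap between neighbouring floats at 1e9
    }
}
```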

Worth reading: http://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
Thanks, that's concise and confirms what I suspected.

So then, is there a way to use a 32-bit depth buffer in XNA? It might not be good enough, but it's better. Or do I need to use something more robust than XNA?

What if I wanted to use a 64-bit buffer? Do graphics cards and/or APIs even allow that kind of thing?

(Good link. Doubles are the bomb lol. Someday I'd like to try implementing an infinite-precision math package for the world data, but that'll probably require a new kind of hardware to do it quickly enough for realtime rendering.)
Actually the most common depth buffer format is 24-bit integer. However the precision is still very non-linear for perspective projections, which means you can run into some big precision problems if there are several orders of magnitude between your near and far clipping planes. Using a floating point format for depth can actually make your precision worse, since the non-linearity of floating point numbers combines with the non-linearity of perspective z/w. However if you flip the near and far planes those two things somewhat cancel each other, giving you a linear-ish distribution of precision. I ran some tests here if you're interested.
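
To put some numbers on that, here's a quick plain-C# sketch applying the standard D3D-style z/w formula and quantizing the result to 24 bits. The near/far values and object distances are just illustrative assumptions for a scene at your scale:

```csharp
using System;

class DepthQuantizationDemo
{
    // D3D-style projected depth in [0,1]: z/w = (far / (far - near)) * (1 - near / z)
    static double ProjectedDepth(double z, double near, double far)
    {
        return (far / (far - near)) * (1.0 - near / z);
    }

    static void Main()
    {
        double near = 0.5, far = 2.0e11;       // near/far spanning the whole scene
        double earthDist = 1.5e11;             // earth, ~150B meters from the camera
        double sunDist   = earthDist + 1.5e9;  // sun's near limb a bit farther away

        // Quantize to a 24-bit integer depth buffer.
        const int maxDepth = (1 << 24) - 1;
        long earthDepth = (long)(ProjectedDepth(earthDist, near, far) * maxDepth);
        long sunDepth   = (long)(ProjectedDepth(sunDist,   near, far) * maxDepth);

        // Both land on the same (or an adjacent) depth value, so the depth test
        // can no longer tell which object is in front.
        Console.WriteLine("{0} vs {1}", earthDepth, sunDepth);
    }
}
```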

No graphics hardware supports 64-bit depth values, and XNA does not expose the 32-bit floating point depth formats since it's based on D3D9. The common solutions to your problem are to partition the current viewable area into multiple depth ranges, using different near/far clip planes for each, or to use a logarithmic depth buffer. However the latter requires manually outputting depth from the pixel shader, which decreases performance. I'm also not sure if it would be able to handle such extreme precision requirements.
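
A rough sketch of the depth-partitioning idea in XNA might look something like this. The slice boundaries and the drawSlice callback are made-up placeholders; the real work is binning your objects into the right slice before drawing:

```csharp
using System;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Render the scene far-to-near in slices, clearing only the depth buffer
// between slices. Each slice spans few enough orders of magnitude that the
// 24-bit depth buffer can resolve it.
public class SlicedRenderer
{
    // Far-to-near slice boundaries (placeholders, tune for your scene).
    static readonly float[,] Slices =
    {
        { 1.0e7f, 2.0e11f },   // far slice: the sun
        { 100.0f, 1.0e7f  },   // middle slice
        { 0.5f,   100.0f  },   // near slice
    };

    public void Draw(GraphicsDevice device, BasicEffect effect, Matrix view,
                     float fov, float aspect, Action<BasicEffect, int> drawSlice)
    {
        for (int i = 0; i < Slices.GetLength(0); i++)
        {
            // Keep the color buffer, but reset depth so slices can't fight each other.
            if (i > 0)
                device.Clear(ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);

            effect.View = view;
            effect.Projection = Matrix.CreatePerspectiveFieldOfView(
                fov, aspect, Slices[i, 0], Slices[i, 1]);

            // The caller draws only the objects whose distance falls inside
            // slice i's near/far range (hypothetical callback).
            drawSlice(effect, i);
        }
    }
}
```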
Wow, that's a lot to digest. I get most of it, but once you get to using multiple clip planes -- woosh! If I had any hair, it would be blowing in the wind right now.

But my question is officially answered. Thanks!

