Xna Z-order precision
Members - Reputation: 100
Posted 24 February 2012 - 07:19 PM
I have a game management system that handles distance culling, so I use a custom frustum to cull objects before rendering. I don't clip at the far plane; instead of culling by distance, my system draws any object that is big enough to be seen at its current distance. It also handles LOD sorting and is very fast.
Anyway, just for fun, I set up an actual-size Sun and Earth at their actual distance apart, using a 1:1 scale where the value 1.0 represents 1.0 meters. So I made the earth object 6M units in radius, the sun object 700M units, and set them 150B units apart.
My system handles everything fine: objects look pretty, no jitter (double precision), and such.
The problem is, the renderer is drawing the sun on top of the earth when the earth should obscure it. I'm guessing this is a z-order precision issue, but I'm not a rendering guy.
My guess is that there's nothing I can do about it, but I wanted to be sure.
Oh, I'm using Xna for rendering right now, but my system allows for any rendering codebase to plug in fairly easily, so I'm asking this specific to Xna but not to the exclusion of other rendering engines.
Members - Reputation: 10413
Posted 24 February 2012 - 07:37 PM
With a 32-bit floating point value, when you store '1,000,000,000' the next value you can store is '1,000,000,064'; that is all the bits you have to spare. Consider then, with 8 bits less (a typical 24-bit depth format) and much greater values, how little precision you are going to have in the z-buffer with those numbers.
Worth reading: http://randomascii.wordpress.com/2012/02/13/dont-store-that-in-a-float/
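You can verify that spacing claim yourself. A quick sketch (Python, not XNA code) that steps a 32-bit float to its next representable value by reinterpreting its bits:

```python
import struct

def next_float32(x):
    """Return the next representable 32-bit float above x."""
    # Reinterpret the float32 bit pattern as an unsigned int, add 1,
    # and reinterpret back: for positive finite values, adjacent bit
    # patterns are adjacent floats.
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits + 1))[0]

# At 1e9, the 23-bit mantissa can no longer represent unit steps:
print(next_float32(1.0e9))  # 1000000064.0 -- a 64-meter gap at 1:1 scale
```

So at solar-system distances, consecutive representable depth values are tens of meters apart before the depth buffer even throws away more bits.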
Members - Reputation: 100
Posted 24 February 2012 - 09:17 PM
So then, is there a way to use a 32-bit depth buffer in Xna? It might not be good enough, but it's better. Or do I need to use something more robust than Xna?
What if I wanted to use a 64-bit buffer? Do graphics cards and/or APIs even allow that kind of thing?
(Good link. Doubles are the bomb lol. Someday I'd like to try implementing an arbitrary-precision math package for the world data, but that'll probably require a new kind of hardware to do quickly enough for realtime rendering.)
Moderators - Reputation: 17547
Posted 25 February 2012 - 12:15 AM
No graphics hardware supports 64-bit depth values, and XNA does not expose the 32-bit floating-point depth formats since it's based on D3D9. The common solutions to your problem are to partition the current viewable area into multiple depth ranges, rendering each with its own near/far clip planes, or to use a logarithmic depth buffer. However, the latter requires manually outputting depth from the pixel shader, which decreases performance, and I'm also not sure it would be able to handle such extreme precision requirements.
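For intuition on why the logarithmic remapping helps, here's a back-of-the-envelope sketch in Python (the clip planes, object distances, and 24-bit depth quantum below are assumed values for illustration, not numbers from this thread). It compares how a standard z/w depth and a logarithmic depth separate two objects at planetary distances:

```python
import math

NEAR, FAR = 1.0, 3.0e11      # hypothetical clip planes spanning the scene
BITS = 24                    # typical fixed-point depth-buffer precision
STEP = 1.0 / (1 << BITS)     # smallest distinguishable depth difference

def standard_depth(z):
    # Post-projection z/w for a standard perspective projection:
    # maps [NEAR, FAR] to [0, 1], but crams almost all of it near 1.
    return (FAR / (FAR - NEAR)) * (1.0 - NEAR / z)

def log_depth(z):
    # Logarithmic remapping: spends precision evenly across magnitudes.
    return math.log(z / NEAR) / math.log(FAR / NEAR)

earth = 1.5e11               # assumed view distance to the nearer object
sun = 1.5e11 + 1.5e9         # assumed view distance to the farther one

# Standard depth: both land in the same 24-bit bucket, so z-fighting wins.
print(abs(standard_depth(sun) - standard_depth(earth)) > STEP)  # False
# Log depth: thousands of buckets apart, so the test resolves correctly.
print(abs(log_depth(sun) - log_depth(earth)) > STEP)            # True
```

The trade-off the post mentions is real: writing this remapped depth from the pixel shader defeats early-z optimizations, which is where the performance cost comes from.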