
Namethatnobodyelsetook

ZBuffer math flawed?


Recommended Posts

The Z-buffer math is designed to be nonlinear (the stored depth is roughly a 1/Z curve), giving a larger share of the useful range to nearby objects. Great so far, but distant objects suffer from two side effects. First, and obviously, they're on the flat part of the curve, where Z barely changes no matter how far apart two objects are. Second, because of how floating-point numbers work, and the fact that depth values quickly approach 1.0f, your precision is limited to some number of bits counted down from the 0.5 bit: near 1.0 a float's exponent can't change any more, so only the trailing mantissa bits are left to express a difference. So we're making the changes in Z small, while at the same time forcing those changes into just a few bits at the end of the float.

If you instead stored Z as 1.0f for near and 0.0f for far, but kept the same curve (1.0f - normalZ), you'd get precision near the front from the shape of the curve, and you'd get precision near the back since you're no longer tied to the 0.5 bit. Textures should be fine, as they use 1/W. Fog, assuming it doesn't use W, can be dealt with in a shader.

The only problem is: what would a graphics card make of this data for Z? They only use 24 bits for Z, not 32... what do they store? Is it fixed point, or a smaller float? If it's fixed point, this won't help at all.

Anyway, it's just a thought that popped into my head. I haven't done any tests or anything... It might not be new; it might be an old technique that lost favour for some reason. Any thoughts? Any card manufacturers care to comment on what your Z-buffer implementation would do?
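To make the floating-point part concrete, here is a minimal C++ sketch (assuming IEEE-754 32-bit floats; the sample depth values are arbitrary) that prints how far apart adjacent representable floats are at a few depths:

#include <cmath>
#include <cstdio>

int main()
{
    // Print the gap between a depth value and the next representable
    // 32-bit float (one ULP). Near 1.0f the exponent can no longer change,
    // so only the low mantissa bits are left to express a difference;
    // near 0.0f the exponent keeps shrinking and adjacent floats get
    // enormously closer together.
    const float depths[] = { 0.0001f, 0.5f, 0.999f, 0.9999f };
    for (int i = 0; i < 4; ++i)
    {
        float d   = depths[i];
        float gap = std::nextafter(d, 2.0f) - d;
        std::printf("depth %.4f -> gap to next float: %g\n", d, gap);
    }
    return 0;
}

Everything from 0.5f up to 1.0f shares one exponent, so the gap is a constant ~6e-8 across that whole range, while down near 0.0f it is orders of magnitude smaller.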

What's wrong with having less precision near the far plane? I could see how some techniques that use Z-buffer data might get messed up, but to the user, if an object is that far away anyway, it's not going to make much of a difference how precise it is.

It's not a problem that it gets less precise; it's that we force it to be even less precise than necessary. Keep the nonlinear curve, favoring near objects, but at the same time use the 0-1 range of floats in a way that doesn't make Z for far objects needlessly imprecise.

One of the reasons we don't usually have long-range views, besides fillrate and transform time, is the Z-buffering artifacts that show up.

Assuming video cards store Z in a standard float format, having far be a Z of 0.0f would improve the Z precision of far objects while not really affecting the precision of near objects. There appears to be an advantage, with no disadvantage.
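Here is a minimal C++ sketch of the idea, assuming a D3D-style projection that maps view-space Z in [near, far] onto [0, 1] after the perspective divide, and assuming the card keeps the result as a standard 32-bit float; the near/far planes and the two test depths are made-up numbers:

#include <cstdio>

// Post-projection depth for a D3D-style perspective transform, mapping
// view-space z in [n, f] onto [0, 1]:   d(z) = (f / (f - n)) * (1 - n / z)
// The reversed mapping swaps the roles of near and far, which works out
// to the same thing as 1 - d(z):        r(z) = (n / (n - f)) * (1 - f / z)
static float standardDepth(double z, double n, double f)
{
    return (float)((f / (f - n)) * (1.0 - n / z));
}

static float reversedDepth(double z, double n, double f)
{
    return (float)((n / (n - f)) * (1.0 - f / z));
}

int main()
{
    const double n = 0.1, f = 10000.0;

    // Two distant surfaces one unit apart in view space.
    float s0 = standardDepth(9000.0, n, f), s1 = standardDepth(9001.0, n, f);
    float r0 = reversedDepth(9000.0, n, f), r1 = reversedDepth(9001.0, n, f);

    std::printf("standard: %.9g vs %.9g -> %s\n", s0, s1,
                s0 == s1 ? "same float (Z-fighting)" : "distinct");
    std::printf("reversed: %.9g vs %.9g -> %s\n", r0, r1,
                r0 == r1 ? "same float (Z-fighting)" : "distinct");
    return 0;
}

With these numbers the standard mapping rounds both surfaces to the same float, while the reversed mapping keeps them distinct; the one practical change is that the depth comparison has to flip from less-than to greater-than.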

I see what you are saying. It's an interesting point I had never thought of before. Are you sure that video cards use 0/1 for near/far internally? Also, does anyone know how widely the W-buffer is supported these days?

-Madgap

Very little support is given for W-buffers any more. The GeForce FX dropped support for it, and ATI never really had it. Mainly Intel and old(ish) NVIDIA parts did, along with a few others.

I like pie.

