Brad Sweet

Alternative ZBuffer Schemes


Hi. Since my earlier post didn't yield the answer I hoped for, I thought I'd try a different tack. Are there any alternative techniques to the standard way the Z buffer is populated that make it possible to maintain a (more) consistent level of precision for scenes where there are objects close to the camera as well as ones far away? Thanks again, in advance, for any thoughts. Brad Sweet

Moving the near Z clip plane further away from the camera makes the range in the Z buffer far more linear. Or at least, that's what Jim Blinn reported in his "W Pleasure, W Fun" article in Jim Blinn's Corner (reprinted in Notation, Notation, Notation and A Trip Down the Graphics Pipeline, IIRC).

Obviously the downside of that is clipping that far away is unacceptable for certain scenes.


I suspect the methods you've already tried, unprojecting Z back into homogeneous space or using W, are the only ways to do this "properly".

But... the Z and W values of the post-shader-transformed vertex are what are used to a) write to the Z buffer and b) make it perspective-correct, respectively, so screwing around with those too much is going to screw around with rendering in general.

Additionally, hardware schemes such as hardware Z compression are likely to be upset when you do non-standard or out-of-range things they weren't expecting. You may be able to interfere with hardware Z compression on some chips by enabling alpha testing for all geometry, but that would be a horrible hack rather than anything robust.


If accuracy is more important than performance, you could go for full-detail impostors for far-away geometry and render those with the near clip plane set to the camera-space position of each impostor so you get full precision; you'll probably need to clear Z for each, too.


Sorry I don't have any better ideas/news - but emulating a W buffer is never anything I've needed to try, and I suspect not many others have either; having most of the Z precision near the camera is preferable for most games (that's where the action is, and you're using lower-LOD for far away things anyway). I suspect that's why most IHVs didn't adopt 3Dfx's W buffers and eventually they died out, and why there's no "obvious" workaround to get them back.

Not sure if this makes sense, but could you sort your geometry into two buckets, a "far away" bucket and a "near" bucket? Clear the Z buffer, set up your projection with the near clip plane quite far back, and render your far bucket. Once finished, clear the Z buffer again, set up the projection with a closer near and a closer far, and draw your near bucket.

I've never done this myself, so who knows what problems might come out of it, but maybe it helps?

Thanks for the quick and informative responses!

I'm currently using a variation of the "bucket" approach. It's adequate, but I was hoping that I could find something a little more robust. I have several selectable camera views (with zoom) and managing the various z-range-scenarios is a tad tedious.

Part of my problem is finding D3DRS_DEPTHBIAS values (for shadow volumes) that work consistently as I move within my "world". I figured with a "linear"-Z buffer this would be easier to control.

As an aside, I received some feedback from ATI support on the linear Z stuff and they indicate the anomalies I have (ref: my post two days ago) are likely because I'm computing my initial Z in eye space but the hardware interpolates Z in screen space. They suggested that I could pass my initial Z to the pixel shader as a texcoord and write it to the Z buffer there. But... as "S1CA" hinted, ATI says this would disable their HyperZ support and be a performance hit.

Alas, I guess I'll just have to use the normal approaches... for now...

Thanks again for the ideas,
Brad Sweet

Quote:
Original post by Brad Sweet
...

Part of my problem is finding D3DRS_DEPTHBIAS values (for shadow volumes) that work consistently as I move within my "world". I figured with a "linear"-Z buffer this would be easier to control.

...



Try setting D3DRS_DEPTHBIAS as follows:

D3DRS_DEPTHBIAS = -slots/((2^zbd)-1)


Where:

zbd = the bit depth of your Z buffer: 16, 24 or 32 depending on which format you set.

slots = the number of discrete z buffer "slots" to bias forward toward the camera.

Quote:
Original post by Brad Sweet
...

Part of my problem is finding D3DRS_DEPTHBIAS values (for shadow volumes) that work consistently as I move within my "world". I figured with a "linear"-Z buffer this would be easier to control.

As an aside, I received some feedback from ATI support on the linear Z stuff... ATI says this would disable their HyperZ support and be a performance hit.

...

You can change your Z comparison function to LESS instead of LESSEQUAL when rendering the shadow volume; this removes the need for a depth bias.

Regarding HyperZ: there are some things that can disable it (one of them is the use of depth-fail shadow volumes), so if you are already doing any of them, don't worry about HyperZ as it is already disabled. The things that disable HyperZ, as stated by ATI, are:
-changing the Z comparison function during frame rendering, for example rendering part of the frame with D3DCMP_LESS and another part with D3DCMP_GREATER.
-using the D3DCMP_EQUAL or D3DCMP_NOTEQUAL depth comparison functions.
-outputting depth values from pixel shaders.
-using stencil-fail and stencil-depth-fail operations.
-using the TEXKILL shader instruction or alpha testing.

The extra insight is much appreciated!

The -slots/((2^zbd)-1) makes perfect sense since it's added to the z-buffer value after the w-divide (...according to ATI; I assume nVidia is the same).

But...
Early on, I tried the D3DRS_DEPTHBIAS algorithm you presented without much success (shadow artifacts still periodically "popped through" the actual object). Consequently, I gave up and just iterated on D3DRS_DEPTHBIAS until the problem went away, but without any deterministic value for the bias (I was reduced to trial and error).

I was thinking about Simon's post last night and realized my failure may have been related to using shadow-generating objects that were less complicated (simpler geometry) than the actual object, thereby likely invalidating the direct approach you suggest. I'm going to have to fiddle with this (as well as Mohamed's suggestions) some more and see if I can converge on a more robust shadow approach... to date it's been a pain!

Thanks again for both your inputs,
Brad





