Why does depth bias have a slope-scale parameter?


As we know, depth bias (D3D) / polygon offset (GL) takes two parameters: a slope factor, which is scaled by m = max(|dz/dx|, |dz/dy|), and a bias factor, which is a constant offset in depth. The result is computed like this (from the DX SDK):

Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS

I wonder why we need the slope parameter at all. Why not just modify the depth value with the bias factor alone, without the slope factor? I couldn't find an answer on Google. Can anyone confirm whether my reasoning below is right?

My opinion: fragments come from rasterizing a polygon, so polygons at different slopes generate fragments with different depth gradients. When m = max(|dz/dx|, |dz/dy|) is large, the polygon is nearly edge-on to the viewer, so fragments at the same screen position, generated from coplanar polygons, may end up with very different depth values. In that case the depth comparison is affected both by the imprecision of the z value and by rasterization differences, which is why the depth bias needs both factors.

Thanks a lot.
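For context, here is a minimal sketch of how the two parameters are set on each API. This is not from the SDK sample: `device` is assumed to be a valid IDirect3DDevice9*, and the numeric values are placeholders you would tune per scene.

    // Direct3D 9 (assumes <d3d9.h> and a created device).
    // Both render states are floats passed as their raw bit pattern
    // in a DWORD, the usual idiom for float render states.
    float slopeScale = 1.0f;    // multiplied by m = max(|dz/dx|, |dz/dy|)
    float depthBias  = 0.0001f; // constant offset added on top
    device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS, *(DWORD*)&slopeScale);
    device->SetRenderState(D3DRS_DEPTHBIAS,           *(DWORD*)&depthBias);

    // OpenGL: glPolygonOffset(factor, units) plays the same two roles.
    // factor scales the depth slope m; units scales the smallest
    // resolvable depth difference r, giving offset = factor*m + units*r.
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.0f, 1.0f);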

This presentation explains the need for a bias proportional to the depth slope of the polygon. It has nice diagrams that make it fairly clear. Feel free to ask if you don't understand any of it.
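To put numbers on the same point, here is a small self-contained C++ sketch of the offset formula. The helper name and the sample gradients are mine, not from the SDK; they just illustrate how the slope term behaves.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Offset formula from the DX SDK:
    //   offset = m * slopeScale + bias, where m = max(|dz/dx|, |dz/dy|)
    static float depthOffset(float dzdx, float dzdy,
                             float slopeScale, float bias)
    {
        float m = std::max(std::fabs(dzdx), std::fabs(dzdy));
        return m * slopeScale + bias;
    }

    int main()
    {
        // Polygon facing the camera: depth barely changes per pixel,
        // so the slope term contributes almost nothing.
        std::printf("facing: %g\n", depthOffset(0.001f, 0.0f, 2.0f, 0.0001f));

        // Nearly edge-on polygon: the rasterizer's sample point can land
        // up to roughly m away from the "true" depth, so the bias must
        // grow with the slope. A constant bias large enough here would
        // be far too large for the facing polygon above.
        std::printf("steep:  %g\n", depthOffset(50.0f, 0.0f, 2.0f, 0.0001f));
        return 0;
    }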
