ccanan

Why does depth bias have a slope-scale parameter?


As we know, depth bias (DX) / polygon offset (GL) has two parameters: a slope factor, related to m = max(|dz/dx|, |dz/dy|), and a bias factor, a constant offset in depth. The result looks like this (copied from the DX SDK):

Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS

What I wonder is: why do we need the slope parameter at all? Why not just modify the depth value with the bias factor alone, without the slope factor? I didn't find an answer on Google. ^_^

Can anyone confirm whether my reasoning is right? My take: fragments come from rasterizing the polygon, so polygons at different slopes generate their fragments differently. When m = max(|dz/dx|, |dz/dy|) is large, the polygon is nearly edge-on to the viewer, so fragments at the same screen position, generated from coplanar polygons, can end up with very different depth values. In that case the depth comparison suffers from both z-value imprecision and rasterization differences, so we need both factors in the depth bias.

Thanks a lot.
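To make the formula concrete, here is a tiny standalone sketch I put together (illustrative code only; computeDepthBias and the parameter values are made up by me, not from any SDK):

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Illustrates the DX formula:
    //   Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS
    // where m = max(|dz/dx|, |dz/dy|) over the polygon.
    float computeDepthBias(float dzdx, float dzdy,
                           float slopeScale, float constantBias)
    {
        float m = std::max(std::fabs(dzdx), std::fabs(dzdy));
        return m * slopeScale + constantBias;
    }

    int main()
    {
        // Polygon facing the camera: depth barely changes per pixel,
        // so the slope term contributes almost nothing.
        std::printf("facing : %f\n", computeDepthBias(0.001f, 0.001f, 2.0f, 0.0001f));

        // Nearly edge-on polygon: depth changes a lot per pixel,
        // so it receives a much larger total offset.
        std::printf("edge-on: %f\n", computeDepthBias(5.0f, 0.2f, 2.0f, 0.0001f));
        return 0;
    }

At least that's how I read it: a constant bias big enough for the edge-on case would be far too large for the face-on case, which seems to be the gap the slope term fills.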

This presentation explains the need for a bias proportional to the depth slope of the polygon. It has nice diagrams that make it fairly clear. Feel free to ask if you don't understand any of it.
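In case it helps while you read it, this is how the two knobs are exposed in each API. A minimal sketch only: the wrapper function names are mine, the two functions are shown together just for comparison (you'd normally compile against one API), and good values are scene-dependent.

    #include <d3d9.h>
    #include <GL/gl.h>

    // D3D9: both bias terms are render states. SetRenderState takes a
    // DWORD, so the floats are passed by reinterpreting their bits
    // (the usual D3D9 idiom for float-valued render states).
    void setDepthBiasD3D9(IDirect3DDevice9* device,
                          float slopeScale, float constantBias)
    {
        device->SetRenderState(D3DRS_SLOPESCALEDEPTHBIAS,
                               *reinterpret_cast<DWORD*>(&slopeScale));
        device->SetRenderState(D3DRS_DEPTHBIAS,
                               *reinterpret_cast<DWORD*>(&constantBias));
    }

    // OpenGL: glPolygonOffset(factor, units). "factor" multiplies the
    // polygon's depth slope m, and "units" scales the smallest
    // resolvable depth difference, mirroring the two terms of the
    // DX formula.
    void setDepthBiasGL(float slopeFactor, float units)
    {
        glEnable(GL_POLYGON_OFFSET_FILL);
        glPolygonOffset(slopeFactor, units);
    }

Something like glPolygonOffset(1.0f, 1.0f) is a common starting point, but expect to tune both values for your scene.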

