Why does depth bias have a slope-scale parameter?

As we know, depth bias (DX) / polygon offset (GL) takes two parameters:

- a slope-scale factor, which is applied to m = max(|Δz/Δx|, |Δz/Δy|), the maximum depth slope of the polygon;
- a bias factor, a constant bias added to the depth.

The result (copied from the DX SDK) is:

Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS

Well, I wonder why we need the slope parameter. Why not just modify the depth value with the bias factor alone, without the slope factor? I didn't find an answer on Google. Can anyone confirm whether my reasoning is right?

My reasoning is: fragments come from rasterizing the polygon, so polygons with different slopes generate different fragments. When m = max(|Δz/Δx|, |Δz/Δy|) is large, the polygon is very steep relative to the screen, so fragments at the same screen position, generated from coplanar polygons, may have very different depth values. In that case the depth comparison is affected by the imprecision of the z value and by the rasterization, so we need both factors in the depth bias.

Thanks a lot.
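To make the formula concrete, here is a minimal sketch of how the two factors combine, following the SDK formula quoted above. The function names and parameters are illustrative only, not a real API; the hardware does this per primitive during rasterization.

```cpp
#include <algorithm>
#include <cmath>

// m: the maximum depth slope of the triangle in screen space,
// i.e. max(|dz/dx|, |dz/dy|).
float depthSlope(float dzdx, float dzdy) {
    return std::max(std::fabs(dzdx), std::fabs(dzdy));
}

// The combined offset, per the DX SDK formula:
//   Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS
// (here slopeScaleBias and constantBias stand in for the two render states).
float depthOffset(float dzdx, float dzdy,
                  float slopeScaleBias, float constantBias) {
    return depthSlope(dzdx, dzdy) * slopeScaleBias + constantBias;
}
```

Note what this gives you: a screen-facing triangle (m near 0) receives only the constant bias, while a steep, nearly edge-on triangle (large m) receives a proportionally larger offset, which is exactly the case where per-fragment depth interpolation error is largest.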
