Why does depth bias have a slope-scale parameter?

Started by
0 comments, last by AndyTX 17 years, 1 month ago
As we know, depth bias (DX) / polygon offset (GL) takes two parameters: a slope-scale factor, which is multiplied by m = max(|Δz/Δx|, |Δz/Δy|), and a constant bias factor added to the depth. The result looks like this (copied from the DX SDK):

Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS

Well, I wonder why we need the slope parameter. Why not just modify the depth value with the constant bias factor alone, without the slope factor? I didn't get an answer from Google ^_^; can anyone confirm whether my opinion is right?

My opinion is: fragments come from rasterizing the polygon, so polygons with different slopes generate fragments differently. When m = max(|Δz/Δx|, |Δz/Δy|) is big, the polygon is very steep relative to the screen, so fragments at the same screen position, generated from coplanar polygons, may have very different depth values. In this case the depth comparison is affected by the z value's imprecision and by rasterization, so we need both factors in the depth bias.

Thanks a lot.
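To make the formula above concrete, here is a minimal sketch in C++ of how the final offset is computed from the two render states. The function name `depthBiasOffset` is hypothetical (it is not part of the D3D API); only the formula Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS is from the SDK:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical helper mirroring the D3D9 depth-bias formula:
//   Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS
// where m = max(|dz/dx|, |dz/dy|) is the polygon's maximum depth slope
// in screen space, computed per-triangle by the rasterizer.
float depthBiasOffset(float dzdx, float dzdy,
                      float slopeScaleBias, float constantBias)
{
    float m = std::max(std::fabs(dzdx), std::fabs(dzdy));
    return m * slopeScaleBias + constantBias;
}
```

Note how a screen-facing polygon (m ≈ 0) receives essentially only the constant bias, while a steep, nearly edge-on polygon (large m) receives a proportionally larger offset, which is exactly the situation your reasoning describes.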
This presentation explains the need for a bias proportional to the depth slope of the polygon. It has nice diagrams that make it fairly clear. Feel free to ask if you don't understand any of it.

