# Why does depth bias have a slope-scale parameter?


## Recommended Posts

As we know, depth bias (DX) / polygon offset (GL) takes two parameters: a slope-scale factor, applied in proportion to m = max(abs(Δz/Δx), abs(Δz/Δy)), and a constant bias factor. The result (copied from the DX SDK) is:

Offset = m * D3DRS_SLOPESCALEDEPTHBIAS + D3DRS_DEPTHBIAS

My question: why do we need the slope parameter at all? Why not just offset the depth value with the constant bias factor and skip the slope factor? I couldn't find an answer on Google. Can anyone confirm whether my reasoning below is right?

My reasoning: fragments come from rasterizing the polygon, so polygons at different slopes generate their fragments differently. When m = max(abs(Δz/Δx), abs(Δz/Δy)) is large, the polygon is nearly edge-on to the viewer, so fragments at the same screen position, generated from coplanar polygons, can end up with very different depth values. In that case the depth comparison is affected both by z-value imprecision and by rasterization differences, which is why we need two factors in the depth bias. Thanks a lot.

This presentation explains the need for a bias proportional to the depth slope of the polygon. It has nice diagrams that make it fairly clear. Feel free to ask if you don't understand any of it.
