# DepthBias : How is MaxDepthSlope computed?


## Recommended Posts

DirectX documentation on DepthBias tells us that

> The MaxDepthSlope value is the maximum of the horizontal and vertical slopes of the depth value at the pixel.

From my understanding, that would mean this is the largest of these four values:

- Depth value at the pixel minus Depth value at the pixel on the right

- Depth value at the pixel minus Depth value at the pixel on the left

- Depth value at the pixel minus Depth value at the pixel on the top

- Depth value at the pixel minus Depth value at the pixel on the bottom

However, as I did not find an explicit formula in the documentation, I wonder if that is right.

Is it computed differently?

Edited by wil_

---

AFAIK it's `max(abs(ddx(z)), abs(ddy(z)))`, but yeah, it's hard to find the actual definition.

Also, AFAIK the whole bias formula depends on your render-target format.

For integer depth buffers:

```
Bias = slopeScaled * max(abs(ddx(z)), abs(ddy(z))) + depthBias * 1/pow(bits_in_z_format, 2)
```

For floating-point depth buffers:

```
Bias = slopeScaled * max(abs(ddx(z)), abs(ddy(z))) + depthBias * pow(exp2(max_z_in_primitive) - mantissa_bits_in_z_format, 2)
```
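As a minimal sketch of the slope definition above, here is one pixel's `max(abs(ddx(z)), abs(ddy(z)))` approximated with forward differences. The function name and the idea of sampling neighbouring pixels are my own illustration; real hardware typically derives the slope per primitive from the plane equation rather than from neighbour samples.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative only: approximate max(abs(ddx(z)), abs(ddy(z))) for one
// pixel from its right-hand and lower neighbours, the same way ddx/ddy
// approximate screen-space derivatives with forward differences.
float MaxDepthSlope(float z, float zRight, float zBelow)
{
    float ddxZ = zRight - z; // horizontal slope of depth
    float ddyZ = zBelow - z; // vertical slope of depth
    return std::max(std::fabs(ddxZ), std::fabs(ddyZ));
}
```

For example, with z = 0.5, zRight = 0.75, zBelow = 0.25, both slopes have magnitude 0.25, so the result is 0.25.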

---

However I just noticed this in the documentation:

> The bias value is constant for a given primitive and is added to the z value for each vertex before interpolator setup.

That would mean the bias is independent of the pixels, no?

I guess the slope is then computed only from the vertices of the rasterized primitive (for example, the three vertices of a triangle).

In your reply there is also this part that does not appear in the documentation:

`pow(exp2(max_z_in_primitive) - mantissa_bits_in_z_format, 2)`

In the documentation, 2 is instead the base of the exponentiation, raised to the power of the other terms:

`2**(exponent(max z in primitive) - r)`

Is that an error in MSDN?

Edited by wil_

---

Going back to an earlier version of the docs, before most of the content was removed, these are the formulas shown. I have found stuff in the older version (March 2009) that isn't even online any more, and it is pretty much the same as what your link shows.

There are two options for calculating depth bias.

1. If the depth buffer currently bound to the output-merger stage has a UNORM format, or no depth buffer is bound, the bias value is calculated like this:

   ```
   Bias = (float)DepthBias * r + SlopeScaledDepthBias * MaxDepthSlope;
   ```

   where r is the minimum representable value > 0 in the depth-buffer format converted to float32. The remaining values are structure members.

2. If a floating-point depth buffer is bound to the output-merger stage, the bias value is calculated like this:

   ```
   Bias = (float)DepthBias * 2**(exponent(max z in primitive) - r) + SlopeScaledDepthBias * MaxDepthSlope;
   ```

   where r is the number of mantissa bits in the floating-point representation (excluding the hidden bit); for example, 23 for float32.

The bias value is then clamped like this:

```
if (DepthBiasClamp > 0)
    Bias = min(DepthBiasClamp, Bias)
else if (DepthBiasClamp < 0)
    Bias = max(DepthBiasClamp, Bias)
```

The bias value is then used to calculate the pixel depth.

```
if ((DepthBias != 0) || (SlopeScaledDepthBias != 0))
    z = z + Bias
```
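Those rules can be sketched in C++ roughly as below. This is only an illustration of the quoted formulas, not a real D3D API: the function names are mine, the UNORM case assumes a pure n-bit UNORM depth format (e.g. 24 bits for a D24 buffer, so r = 2^-24), and the float case assumes max z in the primitive is positive.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// UNORM case: r is the minimum representable value > 0, i.e. 2^-bits
// for an n-bit UNORM depth format (e.g. unormBits = 24 for D24).
float ComputeBiasUnorm(int depthBias, float slopeScaledDepthBias,
                       float maxDepthSlope, int unormBits)
{
    float r = std::ldexp(1.0f, -unormBits); // 2^-unormBits
    return (float)depthBias * r + slopeScaledDepthBias * maxDepthSlope;
}

// Float case: r is the number of mantissa bits (23 for float32, hidden
// bit excluded) and the scale is 2^(exponent(max z in primitive) - r).
// Assumes maxZInPrimitive > 0 (ilogb is undefined for zero).
float ComputeBiasFloat(int depthBias, float slopeScaledDepthBias,
                       float maxDepthSlope, float maxZInPrimitive)
{
    const int r = 23;                      // mantissa bits in float32
    int e = std::ilogb(maxZInPrimitive);   // unbiased binary exponent
    float scale = std::ldexp(1.0f, e - r); // 2^(exponent - r)
    return (float)depthBias * scale + slopeScaledDepthBias * maxDepthSlope;
}

// Clamp step, as in the quoted docs.
float ClampBias(float bias, float depthBiasClamp)
{
    if (depthBiasClamp > 0.0f) return std::min(depthBiasClamp, bias);
    if (depthBiasClamp < 0.0f) return std::max(depthBiasClamp, bias);
    return bias; // DepthBiasClamp == 0 means no clamping
}
```

For instance, DepthBias = 100 on a 24-bit UNORM buffer with no slope term gives a bias of 100 * 2^-24, about 5.96e-6, which matches the common intuition that DepthBias counts in units of the smallest depth step.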

Edited by LancerSolurus
