wilfrid · Posted August 29, 2014

The DirectX documentation on DepthBias says:

"The MaxDepthSlope value is the maximum of the horizontal and vertical slopes of the depth value at the pixel."

As I understand it, that would make it the largest of these four values:

- depth value at the pixel minus depth value at the pixel to the right
- depth value at the pixel minus depth value at the pixel to the left
- depth value at the pixel minus depth value at the pixel above
- depth value at the pixel minus depth value at the pixel below

However, since I did not find an explicit formula in the documentation, I wonder whether that is right. Is it computed differently?
Hodgman · Posted August 29, 2014

AFAIK it's max(abs(ddx(z)), abs(ddy(z))), but yeah, it's hard to find the actual definition.

Also, AFAIK the whole bias formula depends on your render-target format.

For integer depth buffers:

Bias = slopeScaled * max(abs(ddx(z)),abs(ddy(z))) + depthBias * 1/pow(bits_in_z_format, 2)

For floating-point depth buffers:

Bias = slopeScaled * max(abs(ddx(z)),abs(ddy(z))) + depthBias * pow(exp2(max_z_in_primitive) - mantissa_bits_in_z_format, 2)
wilfrid · Posted August 29, 2014

Thank you for your reply. However, I just noticed this in the documentation:

"The bias value is constant for a given primitive and is added to the z value for each vertex before interpolator setup."

That would mean the bias is independent of the pixels, no? I guess the slope is then computed only from the vertices of the rasterized primitive (for example, the three vertices of a triangle).

There is also a part of your reply that does not appear in the documentation:

pow(exp2(max_z_in_primitive) - mantissa_bits_in_z_format, 2)

The documentation instead just raises 2 to a power of the other parameters:

2**(exponent(max z in primitive) - r)

Is that an error in MSDN?
LancerSolurus · Posted August 29, 2014

Going back to an earlier version of the docs (March 2009) before most of it was removed, these are the formulas shown; I have found stuff in that older version that isn't even online anymore. It is pretty much the same thing your link shows.

There are two options for calculating depth bias.

If the depth buffer currently bound to the output-merger stage has a UNORM format, or if no depth buffer is bound, the bias value is calculated like this:

Bias = (float)DepthBias * r + SlopeScaledDepthBias * MaxDepthSlope;

where r is the minimum representable value > 0 in the depth-buffer format converted to float32. The remaining values are structure members.

If a floating-point depth buffer is bound to the output-merger stage, the bias value is calculated like this:

Bias = (float)DepthBias * 2**(exponent(max z in primitive) - r) + SlopeScaledDepthBias * MaxDepthSlope;

where r is the number of mantissa bits in the floating-point representation (excluding the hidden bit); for example, 23 for float32.

The bias value is then clamped like this:

if (DepthBiasClamp > 0)
    Bias = min(DepthBiasClamp, Bias)
else if (DepthBiasClamp < 0)
    Bias = max(DepthBiasClamp, Bias)

The bias value is then used to calculate the pixel depth:

if ((DepthBias != 0) || (SlopeScaledDepthBias != 0))
    z = z + Bias