vertex shader output interpolation problem

Started by
13 comments, last by hustruan 10 years, 10 months ago

I have a question about vertex shader output interpolation. For example, when implementing shadow mapping, we need a depth map. Let's suppose we want a non-linear depth map, Z/W. Consider the following two shaders: which is the correct way to output a non-linear depth map? I have seen online that some people use the first version while others use the second, but I think only the first version is mathematically correct.

// Version 1: divide z by w in the vertex shader, interpolate the ratio
void NonLinearDepthVS(in float3 iPos : POSITION, out float4 oPos : SV_Position, out float oDepth : TEXCOORD0)
{
    // iPos is a float3, so it must be extended to a float4 before the matrix multiply
    oPos = mul(float4(iPos, 1.0f), mul(World, ViewProj));
    oDepth = oPos.z / oPos.w;
}

float4 NonLinearDepthPS(in float oDepth : TEXCOORD0) : SV_Target0
{
    return float4(oDepth, 0, 0, 0);
}

// Version 2: interpolate z and w separately, divide in the pixel shader
void NonLinearDepthVS(in float3 iPos : POSITION, out float4 oPos : SV_Position, out float2 oDepth : TEXCOORD0)
{
    // iPos is a float3, so it must be extended to a float4 before the matrix multiply
    oPos = mul(float4(iPos, 1.0f), mul(World, ViewProj));
    oDepth = oPos.zw;
}

float4 NonLinearDepthPS(in float2 oDepth : TEXCOORD0) : SV_Target0
{
    return float4(oDepth.x / oDepth.y, 0, 0, 0);
}


Unless you're using oDepth somewhere else, it's the same thing. The second version just does the division in the second function.

void hurrrrrrrr() {__asm sub [ebp+4],5;}

There are ten kinds of people in this world: those who understand binary and those who don't.

Unless you're using oDepth somewhere else, it's the same thing. The second version just does the division in the second function.

Really? I don't think so. The rasterizer interpolates vertex shader outputs, and the default is perspective-correct interpolation. Consider rasterizing a triangle. With the first version, the final value for a fragment inside the triangle is a * (Z1/W1) + b * (Z2/W2) + c * (Z3/W3), where (a, b, c) are the barycentric coordinates and Zi/Wi are the values at the three vertices. With the second version, the final value is (a*Z1 + b*Z2 + c*Z3) / (a*W1 + b*W2 + c*W3). Obviously not the same value.
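The two formulas above can be checked numerically. This is just a sketch of the arithmetic in the post, with made-up vertex values and barycentric coordinates; it is not meant to model any particular rasterizer:

```python
# Compare the two formulas from the post with invented vertex values.
z = [1.0, 9.0, 4.0]   # clip-space z at the three triangle vertices
w = [2.0, 10.0, 5.0]  # clip-space w at the three triangle vertices
a, b, c = 0.2, 0.3, 0.5  # barycentric coordinates of some fragment

# Version 1: a*(Z1/W1) + b*(Z2/W2) + c*(Z3/W3)
v1 = a * z[0] / w[0] + b * z[1] / w[1] + c * z[2] / w[2]

# Version 2: (a*Z1 + b*Z2 + c*Z3) / (a*W1 + b*W2 + c*W3)
v2 = (a * z[0] + b * z[1] + c * z[2]) / (a * w[0] + b * w[1] + c * w[2])

print(v1, v2)  # 0.77 vs roughly 0.8305 -- not the same value
```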

I would have to check the docs, but my guess is that texcoords are linearly interpolated. If that's true, then both versions are basically the same, although it'd probably be better to do the division in the vertex shader.

So the output z is divided by w behind the scenes. Can we then use the vertex output position's z as an input in the pixel shader? It should be the same thing.

SV_Position's w coordinate is special -- it's used to implement perspective correct interpolation of ALL vertex outputs.

i.e. oDepth is divided by oPos.w automatically in-between the vertex and pixel shader, during rasterization/interpolation.

So if in the vertex shader you write "oDepth = oPos.z/oPos.w", then in the pixel shader, oDepth will equal oPos.z/oPos.w/oPos.w.

However, none of this is necessary. Just use a depth-stencil target instead of a colour target, and don't do anything special to output depth besides just rasterizing triangles as usual and have the hardware write them to the depth buffer.

So if in the vertex shader you write "oDepth = oPos.z/oPos.w", then in the pixel shader, oDepth will equal oPos.z/oPos.w/oPos.w.

Now you've got me confused. I thought that only the output position is affected "behind the scenes"?

If we save z/w into oDepth in the vertex shader, oDepth should not be affected later on? Or am I wrong?

I am interested in the DX9 pipeline; I don't know about DX11.

Actually, perspective-correct interpolation means that in order to interpolate linearly, the rasterizer first needs to divide each output by SV_Position's w; later, it will multiply the w back in before passing the output to the pixel shader. So if we save z/w into oDepth in the vertex shader, oDepth equals the interpolated z/w in the pixel shader.
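The mechanism described above can be sketched numerically. This is a hand-rolled illustration of the standard perspective-correct formula (divide each attribute by w per vertex, interpolate linearly in screen space, then undo the divide using the interpolated 1/w), with invented vertex values; it is not code from any actual rasterizer:

```python
def perspective_correct(attr, w, bary):
    """Perspective-correct interpolation of one attribute.

    attr, w: per-vertex attribute and clip-space w values.
    bary: screen-space barycentric coordinates of the fragment.
    """
    num = sum(l * (a / wi) for l, a, wi in zip(bary, attr, w))  # lerp attr/w
    inv_w = sum(l / wi for l, wi in zip(bary, w))               # lerp 1/w
    return num / inv_w                                          # multiply w back in

z = [1.0, 9.0, 4.0]   # made-up clip-space z at the vertices
w = [2.0, 10.0, 5.0]  # made-up clip-space w at the vertices

# Store oDepth = z/w per vertex, then interpolate it perspective-correctly.
depth = perspective_correct([zi / wi for zi, wi in zip(z, w)], w, (0.2, 0.3, 0.5))
print(depth)
```

Note that at a vertex, e.g. bary = (1, 0, 0), the function returns exactly that vertex's z/w: the divide-by-w is fully undone rather than applied twice.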

OMG.

Forgive my stupidity, but can you clarify something?

...it first needs to divide by SV_Position's w...

You are talking just about output positions, and oDepth is untouched?

...later, it will multiply the w back in before passing the output to the pixel shader...

So it divides by w after the vertex shader, and then just before the pixel shader it multiplies by w (restoring the old value)?

SV_Position's w coordinate is special -- it's used to implement perspective correct interpolation of ALL vertex outputs.

i.e. oDepth is divided by oPos.w automatically in-between the vertex and pixel shader, during rasterization/interpolation.

So if in the vertex shader you write "oDepth = oPos.z/oPos.w", then in the pixel shader, oDepth will equal oPos.z/oPos.w/oPos.w.

However, none of this is necessary. Just use a depth-stencil target instead of a colour target, and don't do anything special to output depth besides just rasterizing triangles as usual and have the hardware write them to the depth buffer.

Why would oDepth be divided by oPos.w automatically? It uses the TEXCOORD0 semantic, not SV_Position.

This topic is closed to new replies.
