
# vertex shader output interpolation problem


## Recommended Posts

I have a question about vertex shader output interpolation. For example, when implementing shadow mapping, we need a depth map. Let's first suppose we need a non-linear depth map, Z/W. Consider the following two shaders: which is the correct way to output a non-linear depth map? I have seen some people online use the first version and others use the second, but I think only the first version is mathematically correct.

```hlsl
// Version 1: divide in the vertex shader, interpolate z/w as one scalar.
void NonLinearDepthVS(in float3 iPos : POSITION, out float4 oPos : SV_Position, out float oDepth : TEXCOORD0)
{
    // Promote to float4 so the 4x4 transform also applies the translation row.
    oPos = mul(float4(iPos, 1.0f), mul(World, ViewProj));
    oDepth = oPos.z / oPos.w;
}

float4 NonLinearDepthPS(in float oDepth : TEXCOORD0) : SV_Target0
{
    return float4(oDepth, 0, 0, 0);
}
```

```hlsl
// Version 2: interpolate z and w separately, divide in the pixel shader.
void NonLinearDepthVS(in float3 iPos : POSITION, out float4 oPos : SV_Position, out float2 oDepth : TEXCOORD0)
{
    oPos = mul(float4(iPos, 1.0f), mul(World, ViewProj));
    oDepth = oPos.zw;
}

float4 NonLinearDepthPS(in float2 oDepth : TEXCOORD0) : SV_Target0
{
    return float4(oDepth.x / oDepth.y, 0, 0, 0);
}
```

##### Share on other sites

Unless you're using oDepth somewhere else, it's the same thing. The second version just does the division in the second function.


##### Share on other sites

> Unless you're using oDepth somewhere else, it's the same thing. The second version just does the division in the second function.

Really? I don't think so. The rasterizer interpolates vertex shader outputs; the default is perspective-correct interpolation. Consider rasterizing a triangle. For the first version, the final value of a fragment inside the triangle is a*(Z1/W1) + b*(Z2/W2) + c*(Z3/W3), where (a, b, c) are the barycentric coordinates of the fragment and Zi/Wi are the values at the triangle's vertices. But for the second version, the final value is (a*Z1 + b*Z2 + c*Z3) / (a*W1 + b*W2 + c*W3). Obviously not the same value.
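A quick numeric check of the two formulas above. The vertex values and barycentric weights are made up for illustration, and this models plain linear screen-space interpolation of each attribute layout (ignoring any perspective correction):

```python
# Hypothetical clip-space (z, w) values at a triangle's three vertices.
zs = [1.0, 4.0, 9.0]
ws = [2.0, 4.0, 10.0]
a, b, c = 0.2, 0.3, 0.5  # made-up barycentric coordinates of the fragment

# Version 1's layout: interpolate the per-vertex ratios z/w directly.
v1 = a * (zs[0] / ws[0]) + b * (zs[1] / ws[1]) + c * (zs[2] / ws[2])

# Version 2's layout: interpolate z and w separately, then divide.
v2 = (a * zs[0] + b * zs[1] + c * zs[2]) / (a * ws[0] + b * ws[1] + c * ws[2])

print(v1, v2)  # 0.85 vs roughly 0.894: the two results differ
```

So under this simple linear model the two versions do produce different fragment values, which is the point being argued here.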

##### Share on other sites

> Unless you're using oDepth somewhere else, it's the same thing. The second version just does the division in the second function.
>
> Really? I don't think so. The rasterizer interpolates vertex shader outputs; the default is perspective-correct interpolation. Consider rasterizing a triangle. For the first version, the final value of a fragment inside the triangle is a*(Z1/W1) + b*(Z2/W2) + c*(Z3/W3), where (a, b, c) are the barycentric coordinates of the fragment and Zi/Wi are the values at the triangle's vertices. But for the second version, the final value is (a*Z1 + b*Z2 + c*Z3) / (a*W1 + b*W2 + c*W3). Obviously not the same value.

I would have to check the docs, but my guess is that texcoords are linearly interpolated.  If that's true, then both versions are basically the same, although it'd probably be better to do the division in the vertex shader.

##### Share on other sites

So the output z is divided by w behind the scenes. Can we then use the vertex output position's z as an input in the pixel shader? It should be the same thing.

##### Share on other sites

SV_Position's w coordinate is special -- it's used to implement perspective correct interpolation of ALL vertex outputs.

i.e. oDepth is divided by oPos.w automatically in-between the vertex and pixel shader, during rasterization/interpolation.

So if in the vertex shader you write "oDepth = oPos.z/oPos.w", then in the pixel shader, oDepth will equal oPos.z/oPos.w/oPos.w.

However, none of this is necessary. Just use a depth-stencil target instead of a colour target, and don't do anything special to output depth: rasterize triangles as usual and let the hardware write them to the depth buffer.
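For what it's worth, the standard perspective-correct model described above (divide each attribute by w, interpolate linearly in screen space, then divide by the interpolated 1/w) can be sketched numerically. The vertex values and weights below are made up for illustration:

```python
def perspective_correct(values, ws, bary):
    """Perspective-correct interpolation of one vertex attribute.
    values: per-vertex attribute values; ws: clip-space w per vertex;
    bary: screen-space barycentric weights of the fragment."""
    num = sum(b * v / w for b, v, w in zip(bary, values, ws))
    den = sum(b / w for b, w in zip(bary, ws))
    return num / den

zs = [1.0, 4.0, 9.0]    # hypothetical clip-space z per vertex
ws = [2.0, 4.0, 10.0]   # hypothetical clip-space w per vertex
bary = (0.2, 0.3, 0.5)  # made-up screen-space barycentric weights

# Version 2 of the thread: interpolate z and w separately,
# then divide in the "pixel shader".
z_i = perspective_correct(zs, ws, bary)
w_i = perspective_correct(ws, ws, bary)
v2 = z_i / w_i

# Under this model, v2 coincides with the screen-space *linear*
# interpolation of the per-vertex z/w values, which is what the
# hardware depth buffer stores.
linear_z_over_w = sum(b * z / w for b, z, w in zip(bary, zs, ws))
print(v2, linear_z_over_w)  # both 0.85
```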


##### Share on other sites

> So if in the vertex shader you write "oDepth = oPos.z/oPos.w", then in the pixel shader, oDepth will equal oPos.z/oPos.w/oPos.w.

Now you've got me confused. I thought that only the output position is affected "behind the scenes"?

If we save z/w into oDepth in the vertex shader, oDepth should not be affected later on? Or am I wrong?

I am interested in the DX9 pipeline; I don't know about DX11.


##### Share on other sites

> So if in the vertex shader you write "oDepth = oPos.z/oPos.w", then in the pixel shader, oDepth will equal oPos.z/oPos.w/oPos.w.
>
> Now you've got me confused. I thought that only the output position is affected "behind the scenes"?
>
> If we save z/w into oDepth in the vertex shader, oDepth should not be affected later on? Or am I wrong?
>
> I am interested in the DX9 pipeline; I don't know about DX11.

Actually, perspective-correct interpolation means that in order to interpolate linearly in screen space, the rasterizer first needs to divide each attribute by SV_Position's w; later, it multiplies back before passing the output to the pixel shader. So if we save z/w into oDepth in the vertex shader, oDepth equals the interpolated z/w in the pixel shader.
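A sketch of that "divide by w, interpolate linearly, multiply back" description, with made-up per-vertex numbers (this is the textbook perspective-correct model, not any vendor-specific behaviour):

```python
# Hypothetical per-vertex clip-space values and fragment weights.
zs = [1.0, 4.0, 9.0]
ws = [2.0, 4.0, 10.0]
bary = (0.2, 0.3, 0.5)

# The stored vertex attribute is z/w, as in Version 1.
vals = [z / w for z, w in zip(zs, ws)]

# Rasterizer model: divide each attribute by w, interpolate linearly
# in screen space, then divide by the interpolated 1/w ("multiply back").
num = sum(b * v / w for b, v, w in zip(bary, vals, ws))
inv_w = sum(b / w for b, w in zip(bary, ws))
interpolated = num / inv_w

# At a vertex (all barycentric weight on that vertex), the divide and the
# multiply-back cancel, so the original vertex value is reproduced exactly:
at_v0 = (1.0 * vals[0] / ws[0]) / (1.0 / ws[0])
print(interpolated, at_v0)  # at_v0 equals vals[0], i.e. 0.5
```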

##### Share on other sites

OMG.

Forgive my stupidity, but can you clarify some things?

> ...it first needs to divide by SV_Position's w...

Are you talking just about the output position, with oDepth untouched?

> ...later, it multiplies back before passing the output to the pixel shader...

So it divides by w after the vertex shader, and then just before the pixel shader it multiplies by w (restoring the old value)?

##### Share on other sites

> SV_Position's w coordinate is special -- it's used to implement perspective correct interpolation of ALL vertex outputs.
>
> i.e. oDepth is divided by oPos.w automatically in-between the vertex and pixel shader, during rasterization/interpolation.
>
> So if in the vertex shader you write "oDepth = oPos.z/oPos.w", then in the pixel shader, oDepth will equal oPos.z/oPos.w/oPos.w.
>
> However, none of this is necessary. Just use a depth-stencil target instead of a colour target, and don't do anything special to output depth: rasterize triangles as usual and let the hardware write them to the depth buffer.

Why would oDepth be divided by oPos.w automatically?  It uses the TEXCOORD0 semantic, not SV_Position.
