What values get interpolated passing from VS to PS?

8 comments, last by Buckeye 9 years, 1 month ago

Due to something I see when rendering specular reflections, I'm trying to determine what, or how, values get interpolated across a polygon - i.e., how vertex shader outputs are interpolated when they're sent to the pixel shader. Specifically, I've noticed a difference in the rendered specular reflection depending on whether the view direction from the world position to the eye position (camera position, view position), used to determine the specular contribution, is calculated in the vertex shader or in the pixel shader. The remainder of the shader process is the same in both cases (constant buffers, SV_POSITION calculation, etc.).

Can someone enlighten me, perhaps with information and/or a link describing how output values from a vertex shader get interpolated for pixel shader input? It's a learning process, and when coding my shaders I'd like to understand a bit better what I can expect the pixel shader to receive.

What I observed:

=============

For testing purposes, I calculate both the vertex world position and the view direction from the vertex to the eye position in the vertex shader. Both values are passed via the vertex shader's "output" structure to the pixel shader's "input" structure.


struct PS_INPUT
{
    float4 Pos : SV_POSITION;
    float2 Tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 viewDirection : TEXCOORD1;
    float3 worldPosition : TEXCOORD2;
};

// vertex shader
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = mul(input.Pos, ObjWorld); // per-object world transform
    // For comparison, output both the vertex world position and the calculated view direction.
    // Both output.worldPosition and output.viewDirection are float3 with TEXCOORD semantics.
    output.worldPosition = output.Pos.xyz; // capture the world position
    // ... output.Pos is then multiplied by the view-projection matrix for the SV_POSITION output
    output.viewDirection = normalize(eyePos.xyz - output.worldPosition); // float4 eyePos is in a per-frame constant buffer
    // EDIT: the line above was later revised to the following (left unnormalized). See later comments in this thread.
    output.viewDirection = eyePos.xyz - output.worldPosition;

// pixel shader
    float3 pLightDir = normalize(-lightDir.xyz); // the true light direction is provided in a constant buffer
    // lightIntensity (the diffuse term) is assumed to be computed earlier in the shader
    float3 reflection = normalize(2 * lightIntensity * input.normal - pLightDir);
    float4 specular;
    // Case 1 - use the view direction calculated in the VS
    specular = pow(saturate(dot(reflection, input.viewDirection)), Power) * SpecularColor;
    // EDIT: the line above was later revised to the following. See comments below in this thread.
    specular = pow(saturate(dot(reflection, normalize(input.viewDirection))), Power) * SpecularColor;
    // Case 2 - calculate the view direction in the PS
    specular = pow(saturate(dot(reflection, normalize(eyePos.xyz - input.worldPosition))), Power) * SpecularColor;
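
For reference, here's a minimal, self-contained version of the two stages stitched together with the PS_INPUT struct above (reflecting the edits noted in the comments). The cbuffer layouts, the ViewProj name, and the diffuse lightIntensity term are filled in here purely for illustration - they're assumptions, not my exact project code:

cbuffer PerFrame : register(b0)
{
    matrix ViewProj;
    float4 eyePos;        // camera position, world space
    float4 lightDir;      // true light direction
    float4 SpecularColor;
    float  Power;
    float3 pad;
};

cbuffer PerObject : register(b1)
{
    matrix ObjWorld;
};

struct VS_INPUT
{
    float4 Pos    : POSITION;
    float2 Tex    : TEXCOORD0;
    float3 normal : NORMAL;
};

PS_INPUT VS(VS_INPUT input)
{
    PS_INPUT output = (PS_INPUT)0;
    float4 worldPos      = mul(input.Pos, ObjWorld);
    output.worldPosition = worldPos.xyz;
    output.Pos           = mul(worldPos, ViewProj);
    output.Tex           = input.Tex;
    output.normal        = mul(input.normal, (float3x3)ObjWorld); // assumes no non-uniform scale
    output.viewDirection = eyePos.xyz - worldPos.xyz;             // deliberately left unnormalized
    return output;
}

float4 PS(PS_INPUT input) : SV_TARGET
{
    float3 pLightDir      = normalize(-lightDir.xyz);
    float3 normal         = normalize(input.normal);
    float  lightIntensity = saturate(dot(normal, pLightDir));   // assumed diffuse term
    float3 reflection     = normalize(2 * lightIntensity * normal - pLightDir);
    // normalize the interpolated relative eye position per pixel (the revised Case 1)
    float4 specular = pow(saturate(dot(reflection, normalize(input.viewDirection))), Power) * SpecularColor;
    return specular;
}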

The only difference between the results shown below is where the view direction used in the specular calculation is computed - Case 1: in the VS; Case 2: in the PS.

Case 1 - the rendered results of that specular calculation appear to exhibit artifacts related to vertex position.

[attachment=26259:vertex_specular.png][attachment=26260:vertex_specular_wireframe.png]

Case 2 - the rendered results appear to be smooth across the polygon.

[attachment=26261:pixel_specular.png]



This isn’t about specific rules on how things get interpolated—they all get interpolated linearly (unless you use an interpolation modifier).
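
For reference, the standard HLSL interpolation modifiers are applied to the pixel-shader input fields like this (a hypothetical struct for illustration, not your code):

struct PS_INPUT_EXAMPLE
{
    float4                  Pos     : SV_POSITION;
    linear          float2  Tex     : TEXCOORD0;   // the default: perspective-correct linear interpolation
    centroid        float3  normal  : NORMAL;      // sample within the covered area (helps along MSAA edges)
    noperspective   float3  viewDir : TEXCOORD1;   // linear in screen space, no perspective correction
    nointerpolation float   matId   : TEXCOORD2;   // no interpolation; taken from the provoking vertex
};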

Your problem is a logical one: interpolating the view vector is not the same as interpolating the position across the triangle.
As the position moves linearly away from the viewer, the changes in the view direction get smaller and smaller.
If you instead interpolate the view direction directly, its changes will all be linear.

You have to interpolate the position and derive the view vector for each pixel.

Also, for future reference, since interpolations are linear, the end points of the view vector might all be normalized, but the points between them (in the pixel shader) will not be. If you interpolate any normals, don’t normalize them in the vertex shader since you have to normalize them in the pixel shader anyway.
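
In other words, using the names from your shader, the pattern is roughly this (a sketch only, not your exact code):

// vertex shader: pass the vectors out unnormalized
    output.normal        = mul(input.normal, (float3x3)ObjWorld); // not normalized here
    output.viewDirection = eyePos.xyz - output.worldPosition;     // not normalized here

// pixel shader: normalize once per pixel, after interpolation
    float3 N = normalize(input.normal);
    float3 V = normalize(input.viewDirection);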


L. Spiro



they all get interpolated linearly ... since interpolations are linear, the end points of the view vector might all be normalized, but the points between them (in the pixel shader) will not be.

Thanks for the response. In particular, it makes sense that the angle (implied by the normalized view direction calculated in the VS) changes linearly rather than correctly per pixel. I'll have to do some pencil-and-paper work to understand all the implications, particularly the apparent difference (seen in the first image) between the interpolation along an edge and the interpolation across the interior of the polygon.


particularly the apparent difference

Half of that is because you didn’t renormalize the view vector in the pixel shader.


L. Spiro


Interesting. I really appreciate the follow-up.

Indeed, revising the specular calc in the PS to:


specular = pow(saturate(dot(reflection, normalize(input.viewDirection))), Power) * SpecularColor;

eliminated the artifacts shown in the first image. Appearance is now similar-to/same-as the last image.


Appearance is now similar-to/same-as the last image.

It should be the same, save for small rounding errors.

I’ve been going over the math in my head since the first post, and it will actually work either way as long as you normalize in both cases.
You can ignore most of my first post.


L. Spiro


Scratch my last post: what I said in the first post is correct.

The appearance is only similar, not the same. I had to get away from the office, clear my head of the math I was doing there, and re-approach this in a more focused state.

This reply is just to make sure you see that new information has been posted.


L. Spiro


Thanks for the update. I really appreciate your time and effort.

Perhaps I'm misinterpreting how the linear interpolation works (which may well be the case), but from my assumptions about interpolation the results should be the same. Please correct me; I'm not trying to be argumentative, honest.

Consider the situation below.

[attachment=26268:eye_position.png]

V0, V1 : polygon vertices

Ep : eye position

vDir0, vDir1 : view directions (unnormalized) calculated at V0 and V1 respectively in the vertex shader.

s : fraction of the distance from V0 to V1 at which the pixel is rendered

IF I understand correctly, the interpolated view direction will be:

vDir(s) = lerp( vDir0, vDir1, s )
        = lerp( (Ep - V0), (Ep - V1), s )
        = (Ep - V0) + ( (Ep - V1) - (Ep - V0) ) * s
        = Ep - lerp( V0, V1, s )

Assuming the interpolated world position in the pixel shader is lerp( V0, V1, s ), the view direction calculated in the pixel shader (Case 2) would also be Ep - lerp( V0, V1, s ).

EDIT: As L. Spiro mentioned above, the view direction in the pixel shader must be normalized for use in the specular calc.
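
In code, then, the two per-pixel expressions should agree (up to rounding) as long as viewDirection is left unnormalized in the vertex shader; a sketch using the names from post #1:

    // both give the same per-pixel direction, provided viewDirection was NOT normalized in the VS
    float3 fromInterpolatedVector   = normalize(input.viewDirection);              // lerp(Ep - V0, Ep - V1, s)
    float3 fromInterpolatedPosition = normalize(eyePos.xyz - input.worldPosition); // Ep - lerp(V0, V1, s)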


Yes, you can do the subtraction to get the vector from the vert to the eye in the vertex shader (and interpolate that). But if you normalize in the vertex shader, then all bets are off. The length of the original vector is important during the interpolation process - normalizing is a non-linear operation, so it alters the result.
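
A quick made-up example of why the length matters (the numbers are purely for illustration):

    // suppose the unnormalized vectors at two verts are vDir0 = (10, 0, 0) and vDir1 = (0, 1, 0)
    // interpolate first, normalize per pixel: lerp(vDir0, vDir1, 0.5) = (5, 0.5, 0),
    //     which normalizes to roughly (0.995, 0.1, 0) - nearly along +x, as it should be,
    //     because the eye is ten times farther from the first vertex than from the second
    // normalize first, then interpolate: lerp((1,0,0), (0,1,0), 0.5) = (0.5, 0.5, 0),
    //     which normalizes to (0.707, 0.707, 0) - a 45-degree direction, clearly wrong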


The length of the original vector is important during the interpolation process

Correct. Actually, rather than calling it "view direction" in the vertex shader, calling it "relative eye position" would make the intent clearer. As L. Spiro mentioned, normalizing it in the vertex shader was the primary cause of the original problem. The other "half" of the problem (also identified by L. Spiro) is that the interpolated value needs to be normalized in the pixel shader.

NOTE: The vertex and pixel shader code in post #1 above has been edited to reflect the discussion - i.e., the relative eye position (called "viewDirection") is left unnormalized in the vertex shader, and the interpolated value is normalized in the pixel shader, making it the view direction for the specular color calculation.

