DX11 HLSL PixelInputType semantics passing wrong values to pixel shader


I've been trying for hours now to find the cause of this problem. My vertex shader is passing the wrong values to the pixel shader, and I think it might be my input/output semantics.

Note: this shader takes in a prerendered texture with the shadows already in it, based on Rastertek Tutorial 42. The light/dark values of the shadows are already encoded in the blurred shadow texture, which is sampled from Texture2D shaderTextures[7] at index 6 in the pixel shader.

struct VertexInputType
{
    float4 vertex_ModelSpace : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
};

struct PixelInputType
{
    float4 vertex_ModelSpace : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float3 viewDirection : TEXCOORD1;
    float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
    float4 vertex_ScrnSpace : TEXCOORD5;
};

Specifically, PixelInputType is causing a ton of trouble: if I swap the semantics "SV_POSITION" on the first member and "TEXCOORD5" on the last one, the pixel shader receives completely different values even though all the calculations are exactly the same. The main symptom is a spotlight effect in the pixel shader that takes the dot product of the light-to-surface vector with the light direction to produce a falloff. It was working before, but in this upgraded version of the shader it seems to give completely wrong values.

(See the full vertex shader code below.) Is there some quirk of pixel shader semantics that I'm missing? Does the order of the variables in the struct matter? I've also attached the full shader files for reference. Any insight would be much appreciated, thanks.

PixelInputType main( VertexInputType input )
{
    //The final output for the vertex shader
    PixelInputType output;

    // Pass through tex coordinates untouched
    output.tex  = input.tex;

    // Ensure w = 1 so the position transforms correctly as a point
    input.vertex_ModelSpace.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.vertex_ModelSpace = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_viewMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_projectionMatrix);

    // Store the clip-space position of the vertex in a separate variable.
    output.vertex_ScrnSpace = output.vertex_ModelSpace;

    // Bring normal, tangent, and binormal into world space
    output.normal = normalize(mul(input.normal, (float3x3)cb_worldMatrix));
    output.tangent = normalize(mul(input.tangent, (float3x3)cb_worldMatrix));
    output.binormal = normalize(mul(input.binormal, (float3x3)cb_worldMatrix));

    // Store worldspace view direction for specular calculations
    float4 vertex_WS = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.viewDirection = normalize(cb_camPosition_WS.xyz - vertex_WS.xyz);

    for(int i = 0; i< NUM_LIGHTS; ++i)
    {
        // Calculate light position relative to the vertex in WORLD SPACE
        output.lightPos_LS[i] = cb_lights[i].lightPosition_WS - vertex_WS.xyz;
    }

    return output;
}
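For reference, the spotlight falloff described above is usually computed in the pixel shader along these lines. This is only a sketch: the member names lightDirection_WS, cb_outerConeCos, and cb_innerConeCos are assumptions for illustration, not names from the actual repo.

```hlsl
// Hypothetical pixel-shader fragment: spotlight falloff from the dot product
// of the light-to-surface vector and the light's direction.
float3 lightToSurface = normalize(-input.lightPos_LS[i]); // points from light toward surface
float spotCos = dot(lightToSurface, normalize(cb_lights[i].lightDirection_WS));

// Smooth falloff between the outer and inner cone angles (given as cosines,
// so outer < inner because cos() decreases with angle).
float falloff = smoothstep(cb_outerConeCos, cb_innerConeCos, spotCos);
```

If lightPos_LS[i] arrives in the pixel shader with wrong values (e.g. because of a semantic mismatch between the stages), spotCos and the falloff will be wrong even though this math is correct.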


Repo link:

https://github.com/mister51213/DirectX11Engine/tree/master/DirectX11Engine

 

Light_SoftShadows_ps.hlsl

Light_SoftShadows_vs.hlsl


Semantics starting with SV_ are reserved "system value" semantics that carry extra meaning and can behave differently from user semantics. In your case, SV_Position is modified by the rasterizer stage: inside the pixel shader its XY components contain pixel coordinates, while a TEXCOORD5 user semantic contains the interpolated clip-space coordinates exactly as you wrote them in the vertex shader.

The order of the members in an input/output struct matters in that it must be the same across the shader stages that write and read them.
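To make the difference concrete, here is a sketch of what the two semantics hold inside the pixel shader. The projection math follows the usual Rastertek-style pattern; projectTexCoord is an illustrative name, not taken from the attached shaders.

```hlsl
float4 main(PixelInputType input) : SV_TARGET
{
    // input.vertex_ModelSpace (SV_POSITION): the rasterizer has replaced this
    // with viewport values; xy are now pixel coordinates, not what the VS wrote.

    // input.vertex_ScrnSpace (TEXCOORD5): still the interpolated clip-space
    // position from the vertex shader, so the perspective divide and the
    // remap to [0,1] texture space must be done manually:
    float2 projectTexCoord;
    projectTexCoord.x =  input.vertex_ScrnSpace.x / input.vertex_ScrnSpace.w * 0.5f + 0.5f;
    projectTexCoord.y = -input.vertex_ScrnSpace.y / input.vertex_ScrnSpace.w * 0.5f + 0.5f;

    // ... sample the blurred shadow texture with projectTexCoord ...
    return float4(0, 0, 0, 1);
}
```

This is why swapping SV_POSITION and TEXCOORD5 between the first and last members changes the values the pixel shader sees: the data the rasterizer overwrites moves with the SV_Position semantic, not with the struct slot.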

3 minutes ago, turanszkij said:

Semantics starting with SV_ are reserved "System Value" semantics [...]
So the variable with the SV_POSITION semantic gets OVERWRITTEN by the GPU, but any other semantic will hold whatever I assign to it. Is that correct? And other than the PixelInputType struct declaration being exactly the same in the vertex and pixel shaders, there is no requirement for the members to be in any particular order, right? Also, do user semantics like TEXCOORD5 need to be defined anywhere else, the way the vertex input semantics are in the input layout in the shader class? Or can they be named whatever I want? Thank you so much.


You can name shader semantics however you want (except for the system-value ones). The order of the semantics must match between stages, but a later stage can also read just a subset of the previous stage's output structure. For example, a pixel shader sometimes doesn't need the whole contents of the vertex shader output, so its struct declaration can be a subset of it, as long as the order and data layout are the same. And apart from the input layout you mentioned, there is nowhere else you need to declare semantic names; they live only in the shaders themselves.
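A minimal sketch of the subset rule described above (illustrative struct names, not from the attached shaders):

```hlsl
// Vertex shader output: the full structure.
struct VSOutput
{
    float4 pos    : SV_POSITION;
    float2 tex    : TEXCOORD0;
    float3 normal : NORMAL;
};

// Pixel shader input: a leading subset is fine, as long as the
// order and layout of the members it does declare match VSOutput.
struct PSInput
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
    // normal omitted: this pixel shader does not read it
};
```

The important thing is that the members the pixel shader does declare appear in the same order, with the same types and semantics, as in the vertex shader output.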

Edited by turanszkij

