
DX11 HLSL PixelInputType semantics passing wrong values to pixel shader


I've been trying for hours now to find the cause of this problem. My vertex shader is passing the wrong values to the pixel shader, and I think it might be my input/output semantics.

*This shader takes in a pre-rendered texture containing the shadows, based on Rastertek Tutorial 42. The light/dark values of the shadows are already encoded in the blurred shadow texture, which is sampled from Texture2D shaderTextures[7] at index 6 in the pixel shader.

struct VertexInputType
{
    float4 vertex_ModelSpace : POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
};

struct PixelInputType
{
    float4 vertex_ModelSpace : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 normal : NORMAL;
    float3 tangent : TANGENT;
    float3 binormal : BINORMAL;
    float3 viewDirection : TEXCOORD1;
    float3 lightPos_LS[NUM_LIGHTS] : TEXCOORD2;
    float4 vertex_ScrnSpace : TEXCOORD5;
};

Specifically, PixelInputType is causing a ton of trouble: if I swap the semantics ("SV_POSITION" on the first variable and "TEXCOORD5" on the last one), the pixel shader receives completely different values even though all the calculations are exactly the same. The main symptom is my spotlight effect in the pixel shader, which takes the dot product of the light-to-surface vector with the light direction to produce a falloff. It was working before, but in this upgraded version of the shader it gives completely wrong values.

(See the full vertex shader code below.) Is there some weird thing about pixel shader semantics that I'm missing? Does the order of the variables in the struct matter? I've also attached the full shader files for reference. Any insight would be much appreciated, thanks.

PixelInputType main( VertexInputType input )
{
    //The final output for the vertex shader
    PixelInputType output;

    // Pass through tex coordinates untouched
    output.tex  = input.tex;

    // Set w = 1 so the translation in the matrix transforms below applies
    input.vertex_ModelSpace.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.vertex_ModelSpace = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_viewMatrix);
    output.vertex_ModelSpace = mul(output.vertex_ModelSpace, cb_projectionMatrix);

    // Store the clip-space position of the vertex in a separate variable for use in the pixel shader.
    output.vertex_ScrnSpace = output.vertex_ModelSpace;

    // Bring normal, tangent, and binormal into world space
    output.normal = normalize(mul(input.normal, (float3x3)cb_worldMatrix));
    output.tangent = normalize(mul(input.tangent, (float3x3)cb_worldMatrix));
    output.binormal = normalize(mul(input.binormal, (float3x3)cb_worldMatrix));

    // Store worldspace view direction for specular calculations
    float4 vertex_WS = mul(input.vertex_ModelSpace, cb_worldMatrix);
    output.viewDirection = normalize(cb_camPosition_WS.xyz - vertex_WS.xyz);

    for(int i = 0; i < NUM_LIGHTS; ++i)
    {
        // Calculate light position relative to the vertex in WORLD SPACE
        output.lightPos_LS[i] = cb_lights[i].lightPosition_WS - vertex_WS.xyz;
    }

    return output;
}
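For context, the pixel-shader side would typically derive projected texture coordinates from vertex_ScrnSpace along these lines (a sketch based on the Rastertek Tutorial 42 approach; the texture-array and sampler names are taken from the description above and may not match the actual files exactly):

```hlsl
// Sketch only: projective sampling of the pre-rendered, blurred shadow
// texture (index 6 of shaderTextures, per the description above).
Texture2D shaderTextures[7];
SamplerState SampleType;

float4 main(PixelInputType input) : SV_TARGET
{
    // Perspective divide, then remap clip-space [-1,1] to UV space [0,1]
    // (Y is negated because texture V grows downward).
    float2 projectTexCoord;
    projectTexCoord.x =  input.vertex_ScrnSpace.x / input.vertex_ScrnSpace.w * 0.5f + 0.5f;
    projectTexCoord.y = -input.vertex_ScrnSpace.y / input.vertex_ScrnSpace.w * 0.5f + 0.5f;

    // The shadow light/dark factor is already baked into this texture.
    float shadowValue = shaderTextures[6].Sample(SampleType, projectTexCoord).r;

    return float4(shadowValue.xxx, 1.0f);
}
```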


Repo link:

https://github.com/mister51213/DirectX11Engine/tree/master/DirectX11Engine

 

Light_SoftShadows_ps.hlsl

Light_SoftShadows_vs.hlsl


Semantics starting with SV_ are reserved "System Value" semantics, which carry special meaning and can behave differently from user semantics. In your case, SV_Position is modified by the rasterizer stage: inside the pixel shader it contains pixel coordinates in its XY components, while a TEXCOORD5 user semantic contains the interpolated clip-space coordinates exactly as you wrote them in the vertex shader.

The order of the members in the input/output struct matters in the sense that it must be the same across the shader stages that write and read them.
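To illustrate the difference, here is a hedged sketch of a pixel shader reading both values from the PixelInputType struct in the post (the return value is just for demonstration):

```hlsl
float4 main(PixelInputType input) : SV_TARGET
{
    // SV_POSITION after rasterization: xy are pixel coordinates
    // (e.g. 400.5, 300.5 at the center of an 800x600 target),
    // z is the depth-buffer value, w is 1/clipPos.w.
    float2 pixelCoords = input.vertex_ModelSpace.xy;

    // TEXCOORD5 is a plain user semantic: it still holds the
    // interpolated clip-space position written by the vertex shader,
    // so a manual perspective divide is needed before using it.
    float2 ndc = input.vertex_ScrnSpace.xy / input.vertex_ScrnSpace.w;

    return float4(ndc * 0.5f + 0.5f, 0.0f, 1.0f);
}
```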

So the variable with the SV_POSITION tag gets OVERWRITTEN by the GPU, but any other tagged variable will hold whatever I assign to it; is that correct? And other than the PixelInputType struct declaration being exactly the same in the vertex and pixel shaders, there is no requirement for the members of the struct to be in a particular order, right? Also, do user semantics like TEXCOORD5 need to be declared anywhere else, the way the vertex input semantics are in the input layout? Or can they just be named whatever I want? Thank you so much.


You can name shader semantics however you want (except for the System-Value ones). The order of the semantics should match between the stages, but a following stage can also read just a subset of the previous stage's output structure. For example, sometimes a pixel shader has no need to read the whole contents of the vertex shader's output structure, so the struct declaration in the PS can be a subset of it, as long as the order and data layout are the same. And lastly, the only place outside the shaders themselves where you declare anything about these structures is the input layout, as you mentioned.
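A minimal sketch of that subset rule (illustrative names, not from the repo):

```hlsl
// Vertex shader output:
struct VSOutput
{
    float4 position : SV_POSITION;
    float2 tex      : TEXCOORD0;
    float3 normal   : NORMAL;
};

// The pixel shader may declare only a subset, as long as the order
// and data layout of the members it keeps match the VS output:
struct PSInput
{
    float4 position : SV_POSITION;
    float2 tex      : TEXCOORD0;
    // normal omitted: this pixel shader never reads it
};
```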



