Shader semantics

Started by
2 comments, last by MJP 11 years, 9 months ago
Today I ran into a problem with my shader and its input from the CPU.

My input layout (C++ side) looks like this:


layout[0].SemanticName = "POSITION";
layout[0].SemanticIndex = 0;
layout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;
layout[0].InputSlot = 0;
layout[0].AlignedByteOffset = 0;
layout[0].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
layout[0].InstanceDataStepRate = 0;
layout[1].SemanticName = "NORMAL";
layout[1].SemanticIndex = 0;
layout[1].Format = DXGI_FORMAT_R32G32B32_FLOAT;
layout[1].InputSlot = 0;
layout[1].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
layout[1].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
layout[1].InstanceDataStepRate = 0;
layout[2].SemanticName = "TEXCOORD";
layout[2].SemanticIndex = 0;
layout[2].Format = DXGI_FORMAT_R32G32_FLOAT;
layout[2].InputSlot = 0;
layout[2].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
layout[2].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
layout[2].InstanceDataStepRate = 0;
layout[3].SemanticName = "COLOR";
layout[3].SemanticIndex = 0;
layout[3].Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
layout[3].InputSlot = 0;
layout[3].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
layout[3].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
layout[3].InstanceDataStepRate = 0;
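For reference, a CPU-side vertex struct matching this layout might look like the following. This is just a sketch (the struct name is mine); the point is that D3D11_APPEND_ALIGNED_ELEMENT packs each element directly after the previous one, so the byte offsets work out to 0, 12, 24 and 32:

```cpp
#include <cstddef> // offsetof

// Hypothetical vertex struct matching the input layout above.
struct Vertex
{
    float position[3]; // POSITION,  DXGI_FORMAT_R32G32B32_FLOAT,    offset  0
    float normal[3];   // NORMAL,    DXGI_FORMAT_R32G32B32_FLOAT,    offset 12
    float tex[2];      // TEXCOORD0, DXGI_FORMAT_R32G32_FLOAT,       offset 24
    float color[4];    // COLOR0,    DXGI_FORMAT_R32G32B32A32_FLOAT, offset 32
};

// The stride passed to IASetVertexBuffers would be sizeof(Vertex), 48 bytes here.
static_assert(offsetof(Vertex, normal) == 12, "NORMAL should follow POSITION");
static_assert(offsetof(Vertex, tex)    == 24, "TEXCOORD0 should follow NORMAL");
static_assert(offsetof(Vertex, color)  == 32, "COLOR0 should follow TEXCOORD0");
static_assert(sizeof(Vertex) == 48, "vertex stride should be 48 bytes");
```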



My shader takes this:



struct VertexInputType
{
float4 position : POSITION;
float3 normal : NORMAL;
float2 tex : TEXCOORD0;
float4 color : COLOR0;
};



Now, when I try to add another color input, COLOR1, nothing gets drawn by the shader. I also noticed that as soon as I change the input-layout semantic to TEXCOORD0 or COLOR0, nothing gets drawn either. What could cause this?

PS: What does the SV_TARGET in "float4 TerrainPixelShader(PixelInputType input) : SV_TARGET {" mean? I never understood that.
Hiya,


I'm not sure I'd be much help here as I'm throwing this out from memory, but...
From what I remember, the two layouts (your C++ input layout and the shader's input struct) should match exactly. However, your C++ side is set up with:


layout[0].Format = DXGI_FORMAT_R32G32B32_FLOAT;

which is a three-component input, while the shader is expecting

float4 position : POSITION;

a four-component input. However, I can't remember whether this makes a difference or not.

The semantic index is separate from the semantic name in the C++ input layout, so you don't use "COLOR1" as a semantic name on the C++ side. For COLOR1:

layout[4].SemanticName = "COLOR";
layout[4].SemanticIndex = 1;
layout[4].Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
layout[4].InputSlot = 0;
layout[4].AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
layout[4].InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
layout[4].InstanceDataStepRate = 0;

then you should be able to use COLOR1 in the shader.
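On the shader side, the input struct would then grow a second color field, something like this (the field name is just an example):

```hlsl
struct VertexInputType
{
    float4 position : POSITION;
    float3 normal   : NORMAL;
    float2 tex      : TEXCOORD0;
    float4 color    : COLOR0;
    float4 color2   : COLOR1; // matches SemanticName "COLOR", SemanticIndex 1
};
```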
Using PIX will show you the exact layout and the data that's being passed through each stage of the render pipeline, including shader inputs and variables. So I'd start there to see what's going on GPU-side.

SV_TARGET is a semantic for the currently set render target. You can append an index value to direct pixel shader output to the corresponding render target. Of course, the render targets need to be valid and have been created on the C++ side first. It can be used for writing to multiple render targets in one pass, typically with (but not limited to) deferred renderers.
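For example, a pixel shader writing to two render targets might look like this. This is just a sketch; it assumes two render targets are bound via OMSetRenderTargets and that PixelInputType carries a color and a normal:

```hlsl
struct PixelOutputType
{
    float4 albedo : SV_Target0; // written to render target 0
    float4 normal : SV_Target1; // written to render target 1
};

PixelOutputType TerrainPixelShaderMRT(PixelInputType input)
{
    PixelOutputType output;
    output.albedo = input.color;
    output.normal = float4(normalize(input.normal), 0.0f);
    return output;
}
```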
Thanks a lot, that was exactly the answer I needed!

layout[4].SemanticName ="COLOR";
layout[4].SemanticIndex = 1;

These two lines fixed it; the first part seems not to matter.

It's totally safe to have your shader take a float4 if the input layout specifies a float3; the shader will just get 1.0 in the w component.

If you have input layout mismatches like the one you had before you fixed the COLOR index, the debug runtimes put an error message in the debug output. To enable the debug runtimes, pass D3D11_CREATE_DEVICE_DEBUG when creating the device.
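Enabling the debug layer looks roughly like this (a sketch, Windows-only, with error handling omitted):

```cpp
// Enable the debug layer only in debug builds; it requires the SDK
// layers to be installed and adds some runtime overhead.
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif

ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(
    nullptr,                  // default adapter
    D3D_DRIVER_TYPE_HARDWARE,
    nullptr,                  // no software rasterizer module
    flags,
    nullptr, 0,               // default feature levels
    D3D11_SDK_VERSION,
    &device,
    nullptr,                  // actual feature level (optional)
    &context);
```

With that flag set, input-layout mismatches and similar errors show up as messages in the debugger's output window.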

This topic is closed to new replies.
