Here's my input layout description:
D3D11_INPUT_ELEMENT_DESC inputLayoutDesc[] =
{
    {
        "TEXCOORD",                   // SemanticName
        0,                            // SemanticIndex
        DXGI_FORMAT_R32G32_FLOAT,     // Format
        0,                            // InputSlot
        D3D11_APPEND_ALIGNED_ELEMENT, // AlignedByteOffset (element is packed directly after the previous one)
        D3D11_INPUT_PER_VERTEX_DATA,  // InputSlotClass (per-vertex, not per-instance data)
        0                             // InstanceDataStepRate (0 for per-vertex data)
    },
    {
        "POSITION",                   // SemanticName
        0,                            // SemanticIndex
        DXGI_FORMAT_R32G32B32_FLOAT,  // Format
        0,                            // InputSlot
        D3D11_APPEND_ALIGNED_ELEMENT, // AlignedByteOffset (element is packed directly after the previous one)
        D3D11_INPUT_PER_VERTEX_DATA,  // InputSlotClass (per-vertex, not per-instance data)
        0                             // InstanceDataStepRate (0 for per-vertex data)
    },
};
Here's my vertex shader:
struct VS_INPUT
{
    float3 Pos : POSITION0;
};

struct PS_INPUT
{
    float4 Pos : SV_POSITION; // Screen-space position
};

PS_INPUT VS(VS_INPUT input)
{
    PS_INPUT output = (PS_INPUT)0;
    output.Pos = float4(input.Pos, 1.0f); // Expand to homogeneous coordinates
    return output;
}
This works fine, meaning I am able to create my input layout object and bind it to the input assembler stage.
My question is, why does this work?
From what I have read, the input layout has to match the vertex shader input signature, but this one clearly does not: the layout declares a TEXCOORD element that the vertex shader never reads. It seems that as long as the layout contains an element matching the POSITION semantic, the runtime will let me create the input layout object and set it.
It seems like this should not work, because the layout does not exactly match the input the vertex shader expects.