Hello,
I've been looking for info but couldn't find anything about this. On D3D9, using the FVF, I was able to pack the color into unsigned bytes (8 bits per component); the semantic in the HLSL shader was just COLOR and everything worked.
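From memory, the D3D9 side looked something like this (a rough sketch; my exact FVF flags and member order may have differed):

struct Vertex9
{
    float x, y, z;   // position
    DWORD color;     // packed D3DCOLOR, one unsigned byte per component
    float u, v;      // texture coordinates
};

// D3DFVF_DIFFUSE tells D3D9 the color is a packed DWORD; the runtime
// expands it to a float4 in [0, 1] for the COLOR semantic in the shader.
const DWORD Vertex9FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX1;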
On D3D11, however, using the input layout interface, I'm declaring:
static D3D11_INPUT_ELEMENT_DESC vertexDesc_2[] =
{
{ "POSITION", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 8, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "COLOR", 0, DXGI_FORMAT_R8G8B8A8_UINT, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
Just 2 components for the position (X and Y, 4 bytes each), 2 components for U and V (4 bytes each), and the color as 4 components of 1 byte each (a packed 32-bit value).
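For reference, the matching CPU-side vertex would be something like this (a sketch; the member names are made up, but the sizes and offsets match the declaration above):

struct Vertex
{
    float x, y;   // POSITION, offset 0
    float u, v;   // TEXCOORD, offset 8
    UINT  color;  // COLOR, offset 16, four 8-bit components packed into 4 bytes
};
// 20 bytes per vertex in total.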
In my vertex shader:
struct VertexShaderInput
{
    float2 pos      : POSITION;
    float2 texcoord : TEXCOORD;
    float4 color    : COLOR;
};
And, as expected, it doesn't work; the debug layer tells me:
ID3D11Device::CreateInputLayout: The provided input signature expects to read an element with SemanticName/Index: 'COLOR'/0 and component(s) of the type 'float32'. However, the matching entry in the Input Layout declaration, element[2], specifies mismatched format: 'R8G8B8A8_UINT'.
I understand the problem, but I don't know how to proceed. There is very little info about this, and I really don't want to waste 16 bytes per vertex by expanding the color to one float per component. I know it's possible, because that's how blend weights and blend indices work for skinning; this is the same thing, just for colors.
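For comparison, this is roughly the skinning pattern I mean (not my code, just the layout style I'd like to reproduce for colors; the offsets here use D3D11_APPEND_ALIGNED_ELEMENT so D3D computes them):

// Typical skinning layout entries:
{ "BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_UINT,  0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
{ "BLENDWEIGHT",  0, DXGI_FORMAT_R8G8B8A8_UNORM, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },

// And in the vertex shader:
//   uint4  indices : BLENDINDICES;  // _UINT formats are read as integers
//   float4 weights : BLENDWEIGHT;   // _UNORM formats are expanded to floats in [0, 1]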
Thank you for your help!