There's something I don't understand about the way DX11 HLSL parses a constant buffer when it's time to "transform" it into a matrix.
I've declared a constant buffer like this:
cbuffer cbPerMesh : register( b1 )
{
    matrix<float,3,4> World;
}
The shader is compiled with D3DCOMPILE_PACK_MATRIX_COLUMN_MAJOR.
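For reference, I pass the flag at compile time roughly like this (a sketch; the file name, entry point and target profile are placeholders for my actual ones):

#include <d3dcompiler.h>

ID3DBlob* code = nullptr;
ID3DBlob* errors = nullptr;
HRESULT hr = D3DCompileFromFile(
    L"mesh.hlsl",                        // placeholder file name
    nullptr, nullptr,                    // no defines, no include handler
    "VSMain",                            // placeholder entry point
    "vs_5_0",                            // placeholder target profile
    D3DCOMPILE_PACK_MATRIX_COLUMN_MAJOR, // request column-major packing
    0,                                   // no effect flags
    &code, &errors);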
The shader should therefore expect to receive the matrix in column-major form, but it's not working the way I expected.
I'm feeding the shader with this data:
f32 temp[12] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 };
gpuProg.updateConstantBuffer (hCBPerMesh, temp, sizeof(f32) * 12);
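(updateConstantBuffer is my own wrapper; it does nothing more than a straight byte copy into the mapped buffer, along the lines of this simplified sketch, assuming a dynamic buffer created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE:)

#include <d3d11.h>
#include <cstring>

// Simplified sketch of the wrapper: straight byte copy, no reordering.
void updateConstantBufferSketch(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                                const void* data, size_t size)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        memcpy(mapped.pData, data, size); // bytes go in exactly as laid out on the CPU
        context->Unmap(buffer, 0);
    }
}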
To my understanding, the World matrix should be like this:
1 2 3 4
5 6 7 8
9 10 11 12
i.e. 3 rows of 4 columns each, fed from an array of 12 floats.
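In other words, on the CPU side I index the array row-major (a trivial helper just to make the intended layout explicit):

// Intended CPU-side interpretation of temp: row-major, 3 rows x 4 columns.
inline f32 elemRC(const f32* m, int row, int col)
{
    return m[row * 4 + col]; // e.g. elemRC(temp, 1, 2) == 7
}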
By debugging the shader, I'm able to see the following:
Data buffer:
#,Address,Float4
0,[0x00000000-0x0000000F],{+1, +2, +3, +4}
1,[0x00000010-0x0000001F],{+5, +6, +7, +8}
2,[0x00000020-0x0000002F],{+9, +10, +11, +12}
which is correct, and means that I'm transferring the buffer properly.
What I don't understand is that the World matrix, inside the vertex shader, appears like this:
World[0] x = 1.000000000, y = 5.000000000, z = 9.000000000, w = 0.000000000 float4
World[1] x = 2.000000000, y = 6.000000000, z = 10.000000000, w = 0.000000000 float4
World[2] x = 3.000000000, y = 7.000000000, z = 11.000000000, w = 8.660300000 float4
I see two "errors":
First, my buffer is not treated as column-major; otherwise World[0] should be "1, 2, 3, 4".
Second, where did my "4", "8" and "12" go? They are skipped when the compiler maps the buffer into the matrix.
To fix this, I have to transpose() my buffer on the CPU side and then declare:
row_major float3x4 World;
This way everything works as expected.
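For completeness, the transpose I apply on the CPU side is nothing special (a generic sketch; my actual code uses my math library's transpose()):

// Transpose a row-major rows x cols matrix into a row-major cols x rows one.
void transposeRM(const f32* src, f32* dst, int rows, int cols)
{
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c)
            dst[c * rows + r] = src[r * cols + c];
}

Applied to the data above: f32 transposed[12]; transposeRM(temp, transposed, 3, 4); and then I upload transposed instead of temp.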