Shader Math Dyslexia

5 comments, last by EvilDecl81 19 years, 3 months ago
This could be another one for namethatnoonetook.... It is my understanding that D3DX and the FFP implement matrix functions using the following convention:

row 1 = x-axis
row 2 = y-axis
row 3 = z-axis
row 4 = translation

Using this convention, position vectors must be pre-multiplied by matrices to get the proper transformation. For example, with M = world matrix and V = position vector in object space:

V * M = V', where V' is the vector in world space

not M * V, which equals nonsense.

However, several different shader tutorials seem to bounce between pre- and post-multiplication. For example, on the Microsoft DirectX site under 'Columns - Driving DirectX', the article 'Programmable Shaders for DirectX 8.0' provides the following transformation example:

dp4 r0.x, v0, c[0]
dp4 r0.y, v0, c[1]
dp4 r0.z, v0, c[2]
dp4 r0.w, v0, c[3]

which seems to illustrate (what I believe to be correct) a vector * matrix operation. Meanwhile, the article 'Introduction to the DirectX 9 High-Level Shader Language' provides this transformation example:

Out.Pos = mul(view_proj_matrix, vPosition);

which seems to illustrate (what I believe to be incorrect) a matrix * vector operation. Even if this is somehow the correct order of operations, it would seem that if we are post-multiplying the vector by the matrix, then the matrix would have to be a proj_view_matrix, not a view_proj_matrix (assuming the name of the matrix specifies the order in which the combination occurred).

Am I making some (or many) simple/fundamental mistakes here that are throwing me off? Thanks for the help.

Todd
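For concreteness, the row-vector convention described above can be checked numerically. A minimal sketch in plain Python (all matrix and vector values invented for illustration):

```python
# Row-vector convention (as in D3DX / the FFP): rows 1-3 are the axes,
# row 4 is the translation.  All values here are made up.
M = [
    [1.0, 0.0, 0.0, 0.0],  # x-axis
    [0.0, 1.0, 0.0, 0.0],  # y-axis
    [0.0, 0.0, 1.0, 0.0],  # z-axis
    [5.0, 6.0, 7.0, 1.0],  # translation
]
v = [1.0, 2.0, 3.0, 1.0]   # object-space position, w = 1

def vec_mat(v, M):
    """Pre-multiply: v' = v * M (row vector times matrix)."""
    return [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]

def mat_vec(M, v):
    """Post-multiply: v' = M * v (matrix times column vector)."""
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

print(vec_mat(v, M))  # [6.0, 8.0, 10.0, 1.0] -- correctly translated point
print(mat_vec(M, v))  # [1.0, 2.0, 3.0, 39.0] -- translation leaks into w: nonsense
```

Note how post-multiplying the untransposed matrix leaves the position untranslated and corrupts w, which is exactly the "nonsense" case.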
No, you're not making a mistake. Direct3D uses row-major matrices (i.e. each row is a vector). However, it is more efficient to do transformations with column-major matrices in shaders. Why?

A vector * matrix transformation is simply 4 dot products: the vector is dotted with the 4 columns of the matrix. Shaders can do a dot product (dp4) on 2 registers in one instruction. So, common sense says we should store matrices in column-major order in shaders, i.e. every register holds a column, so that transformations are done efficiently.
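The "four dot products" claim can be sketched in plain Python (illustrative values; in a real shader each column would live in one constant register, c[0] through c[3]):

```python
def dot4(a, b):
    # One dp4: dot product of two 4-component registers.
    return sum(x * y for x, y in zip(a, b))

M = [  # row-vector-convention matrix, values made up
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 3.0, 0.0],
    [4.0, 5.0, 6.0, 1.0],
]
v = [1.0, 1.0, 1.0, 1.0]

# Column-major storage: constant register c[i] holds column i of M.
c = [[M[row][col] for row in range(4)] for col in range(4)]

# v * M is then one dp4 per column, mirroring the asm
#   dp4 r0.x, v0, c[0]  ...  dp4 r0.w, v0, c[3]
r0 = [dot4(v, c[i]) for i in range(4)]
print(r0)  # [5.0, 7.0, 9.0, 1.0]
```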

So, in order to use matrices with shaders, and still use dp4's to do transformations, you pass the transpose of your matrix. The ID3DXEffect interface does this for you automatically, if you use it.

If you try to compile some transformation code with your HLSL compiler, once with row-major packing and once with column-major, and examine the asm output, you'll notice that the compiler produces more instructions for the row-major packing. However, Robert Dunlop illustrates a way to do the transformation using row-major matrices at no additional cost here. Last time I checked the compiler, it didn't do it this way, IIRC.

EDIT: One more thing. Doing a transformation as dp4's, or better, as a m4x4, gives the driver a hint that this is a transformation, so it might do it faster (if it can). While if you use Dunlop's method, I don't know if that'd be possible.

Hey Coder,

OK, I think I see where I was going wrong. Just to make sure: you're saying that in both examples the transpose of the matrix was passed to the shader. That means both examples are correct: the second is normal matrix * vertex math, while the first is not matrix * vertex math but a series of dot products that equate to the same result.

One more tag-on question, is row-major the standard in game programming or is this just a microsoft convention? It seems like we could save a step if we used col-major in our applications.

Thanks for the help!
OpenGL is column major. It seems like Microsoft decided to go a different direction, for reasons unknown to me, and made DirectX row major.
If I am not using the FFP and I don't use D3DX for matrix math, then is there any reason I shouldn't use column major in my applications when using the D3D API?

Thanks
Quote:Original post by tscott1213
Just to make sure: you're saying that in both examples the transpose of the matrix was passed to the shader. That means both examples are correct: the second is normal matrix * vertex math, while the first is not matrix * vertex math but a series of dot products that equate to the same result.

We pass the transpose of the matrix *only* when using column-major calculations in the shader - this is the default. So whenever you set a matrix shader constant, you need to make sure it's column-major first (i.e. transpose it before you set it). If you use the effects framework, it does this for you.
On the other hand, if you used the method explained by Robert Dunlop, you don't need to transpose row-major matrices.
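As a sanity check on the transpose rule, in plain Python with made-up values: uploading the transpose makes the shader's matrix * vector product reproduce the original vector * matrix result.

```python
def mat_vec(M, v):
    # Post-multiply, as in HLSL's mul(matrix, vector).
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

def vec_mat(v, M):
    # Pre-multiply, the row-vector D3DX convention.
    return [sum(v[i] * M[i][j] for i in range(4)) for j in range(4)]

M = [  # row-vector-convention matrix, translation in row 4 (values made up)
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [2.0, 3.0, 4.0, 1.0],
]
Mt = [[M[j][i] for j in range(4)] for i in range(4)]  # what gets uploaded
v = [1.0, 1.0, 1.0, 1.0]

print(mat_vec(Mt, v))  # [3.0, 4.0, 5.0, 1.0]
print(vec_mat(v, M))   # identical result
```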

Quote:One more tag-on question, is row-major the standard in game programming or is this just a microsoft convention? It seems like we could save a step if we used col-major in our applications.

A lot of people find working with row-major matrices more intuitive (myself included). I don't exactly remember which field(s), but I think some scientific fields use them by convention.

If you'd like to work with column-major matrices, and you're not using the FFP, you can go on and do it [smile] (And if you're using the FFP, you'll just need to transpose them before you set them)

HLSL, by default, uses column_major packing for matrices. This is for efficiency reasons: 1) you can transform a position with only 3 instructions using dp4s (when you don't care about w), and 2) the dp4s do not have read/write hazards.
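Point 1 can be sketched numerically in plain Python (values made up): when the output w is not needed, only three of the four column dot products have to be evaluated.

```python
def dot4(a, b):
    # One dp4: dot product of two 4-component values.
    return sum(x * y for x, y in zip(a, b))

M = [  # row-vector-convention world matrix, values made up
    [2.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 2.0, 0.0],
    [1.0, 1.0, 1.0, 1.0],
]
v = [3.0, 4.0, 5.0, 1.0]

# Column-major packing: one constant register per column of M.
cols = [[M[r][c] for r in range(4)] for c in range(4)]

full = [dot4(v, col) for col in cols]       # 4 dp4s: x, y, z, w
xyz  = [dot4(v, col) for col in cols[:3]]   # 3 dp4s when w is unneeded

print(full)  # [7.0, 9.0, 11.0, 1.0]
print(xyz)   # [7.0, 9.0, 11.0]
```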

It can be changed at a per-matrix level by using the row_major or column_major usage modifier, e.g.:

column_major float4x4 MyMatrix;

Or, the default can be changed via either a command-line switch or the following #pragma:

#pragma pack_matrix(row_major)
EvilDecl81

This topic is closed to new replies.
