D3D11 and multiplication order in the GPU

Started by Buckeye
3 comments, last by Buckeye 9 years, 11 months ago

I'm getting into D3D11, and I faithfully transpose my matrices before sending them to the shader. I use "left-to-right" multiplication order for vector-matrix and matrix-matrix operations in the GPU - e.g., "matrix viewProj = mul( View, Proj );" and "Pos = mul( input.Pos, World );"

That confuses me just a bit, as I seem to be using left-to-right order on both the CPU and the GPU, i.e., the same order for row-major (CPU) and column-major (GPU) matrices.
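
To make that concrete, here's a minimal sketch of the pattern I mean (the buffer and names are just for illustration):

    // CPU side (C++/DirectXMath), shown as comments:
    //   XMMATRIX wvp = world * view * proj;            // row-major, left-to-right
    //   cbData.WorldViewProj = XMMatrixTranspose(wvp); // transpose before upload

    cbuffer PerObject   // illustrative name
    {
        float4x4 WorldViewProj;
    };

    float4 VSMain(float4 pos : POSITION) : SV_Position
    {
        // GPU side: still left-to-right, with the vector on the left.
        return mul(pos, WorldViewProj);
    }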

Does D3D11 by default set flags or otherwise manipulate the GPU so that happens?

Please don't PM me with questions. Post them in the forums for everyone's benefit, and I can embarrass myself publicly.

You don't forget how to play when you grow old; you grow old when you forget how to play.


Not sure I understand the question exactly... but mul(vector, matrix) is always a single-row vector dotted with the columns of the matrix, while mul(matrix, vector) is each row of the matrix dotted with the column vector (of course, as that's how matrix multiplication works). So mul(vector, matrix) == mul(T(matrix), vector).

Then for multiplication order: mul(vector, matrix1 * matrix2) == mul(T(matrix2) * T(matrix1), vector).
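
In shader terms, with placeholder names v and M, these two lines produce the same result:

    float4 a = mul(v, M);             // row vector v times matrix M
    float4 b = mul(transpose(M), v);  // transposed matrix times column vector v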

Whether the driver then somehow rearranges that behind the scenes to fit its preferred memory layout I don't know, but it won't matter for the calculation itself.

D3D, by default, uploads shader constants "transposed" - it actually assumes that matrices are in column-major order unless you specify row_major.

It is just a convention that they use, which can be modified with a compilation flag for your shader. See the details here.
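
As a sketch of both conventions side by side (buffer and member names are hypothetical; the compile-time switch is D3DCOMPILE_PACK_MATRIX_ROW_MAJOR, or #pragma pack_matrix(row_major) in the shader itself):

    cbuffer PerObject   // hypothetical name
    {
        // Default packing is column_major: the CPU-side row-major
        // matrix must be transposed before it is written to the buffer.
        float4x4 World;

        // With row_major, the shader reads the matrix exactly as the
        // CPU stored it, so no transpose is needed on upload.
        row_major float4x4 WorldRM;
    };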

I would also recommend getting very familiar with the matrix classes you are using, and having a solid understanding of how they map to traditional matrix math operations. Depending on the classes you use, they may implement operations differently, so there is no guarantee that a given multiplication order will behave the same way in each of them...

Spend a day or two making some experiments in a unit testing framework (VS comes with a simple one out of the box) that you can refer to, and see how the matrix operations work and what order you need to apply them in - it will be well worth doing, both now and in the future when you run into a special case!


Not sure I understand the question exactly

I probably wasn't clear that, having gotten into D3D11 recently, I hadn't previously gotten "under the hood" with regard to actually transferring data to the shader. In DX9, I had used row-major matrices with left-to-right multiplication order, shaders written in left-to-right fashion, and things worked. As osmanb implies, the DX9 effect framework I was using likely "hid" the transposing. Now that I have to do my own transposing, I noticed row-major left-to-right on the CPU (D3DX/XNA/DirectXMath) vs. column-major left-to-right on the GPU and was curious. D3D11 is closer to the hardware, and it's a learning process.

Cross-posted: Jason Z:

I would also recommend getting very familiar with the matrix classes you are using

Oh, I am now! I just converted my animated mesh loader/animation controller classes from D3DX to XNA math. Now that I transpose the animation mats before I stuff them into my cbSkin buffer, it works just ducky!
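
For anyone following along, the shader side looks roughly like this - cbSkin is my actual buffer, but the array size and function are just illustrative:

    cbuffer cbSkin
    {
        // Filled on the CPU with XMMatrixTranspose()'d bone matrices,
        // since the default packing here is column_major.
        float4x4 Bones[64];   // illustrative size
    };

    float4 SkinPosition(float4 pos, uint4 boneIndices, float4 boneWeights)
    {
        float4 skinned = 0;
        [unroll]
        for (int i = 0; i < 4; ++i)
            skinned += boneWeights[i] * mul(pos, Bones[boneIndices[i]]);
        return skinned;
    }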

I salute you all!

[attachment=21372:D11 engine 13.png]


This topic is closed to new replies.
