Waaayoff

[HLSL] mul(WorldViewProj, position)?


Ok so this is really bugging me. In every shader tutorial they transform the position like this:

mul(position, WorldViewProj);

That doesn't work for me and the models get all messed up. If I switch it to mul(WorldViewProj, position), it works.

But why didn't it work before?



Are you sure you multiplied your WorldViewProj matrix correctly?
Does position.w == 1?


Is your WVP matrix column-major? When you are using D3DX functions, you have to transpose your matrices before sending them to shaders.


Thanks, that was it :)

This is a combination of several factors related to linear algebra, and matrix multiplication in general. Hopefully I will be able to explain.

In linear algebra, vectors and matrices are multiplied using the standard matrix multiplication algorithm. Thus there are a few rules concerning the order of operations and "shape" of the matrices involved. Mathematicians usually treat vectors as matrices containing a single column of elements, with a translation multiplication looking something like this:

[ 1, 0, 0, tx]   [ x]
[ 0, 1, 0, ty] * [ y]
[ 0, 0, 1, tz]   [ z]
[ 0, 0, 0,  1]   [ 1]


First note that matrix multiplication produces a result of a specific row/column configuration according to this simple rule: AxB * BxC = AxC. In other words, a matrix of size A rows and B columns multiplied by a matrix of B rows and C columns will produce a matrix of A rows and C columns. Also, in order to be properly multiplied, B must be equal for both. In this case, we have 4x4 * 4x1, which produces a 4x1, or another column vector. If we changed the order of multiplication, it would be 4x1 * 4x4, which would be illegal.
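The shape rule can be checked with a small pure-Python matrix multiply. The matmul helper and the concrete translation values here are illustrative, not from the thread:

```python
def matmul(A, B):
    """Multiply an (m x n) matrix by an (n x p) matrix, giving (m x p).

    Matrices are nested lists of rows. Raises ValueError when the inner
    dimensions disagree, i.e. when the multiplication is illegal.
    """
    if len(A[0]) != len(B):
        raise ValueError("inner dimensions must match")
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

T = [[1, 0, 0, 5],
     [0, 1, 0, 6],
     [0, 0, 1, 7],
     [0, 0, 0, 1]]        # 4x4 translation matrix (tx, ty, tz = 5, 6, 7)
v = [[1], [2], [3], [1]]  # 4x1 column vector

print(matmul(T, v))       # 4x4 * 4x1 -> 4x1: [[6], [8], [10], [1]]

try:
    matmul(v, T)          # 4x1 * 4x4: inner dimensions 1 and 4 differ
except ValueError:
    print("illegal multiplication")
```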

However, computer scientists often treat vectors as a matrix with a single row. There are several reasons for this, but often because a single row represents a single linear chunk of memory, or a single-dimensional array, since arrays are typically addressed as array[row][column]. In order to avoid using two-dimensional arrays in code, people simply use "row vectors" instead. Thus, in order to achieve the desired result using matrix multiplication, we swap the order to be 1x4 * 4x4 = 1x4, or vector * matrix:

[ x, y, z, 1] * [  1,  0,  0, 0]
                [  0,  1,  0, 0]
                [  0,  0,  1, 0]
                [ tx, ty, tz, 1]


Notice how the translation elements (tx, ty, tz) had to be moved to the bottom row in order to preserve the proper result of the multiplication (the matrix has been transposed).
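To make that transpose relationship concrete, here is an illustrative pure-Python check (the helper names are my own) that the column-vector form and the row-vector form with the transposed matrix produce the same translated point:

```python
def matmul(A, B):
    """Standard matrix multiply on nested lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(M):
    """Swap rows and columns."""
    return [list(col) for col in zip(*M)]

# Column-vector translation matrix: translation sits in the last column.
T = [[1, 0, 0, 4],
     [0, 1, 0, 5],
     [0, 0, 1, 6],
     [0, 0, 0, 1]]

col = matmul(T, [[2], [3], [4], [1]])       # matrix * column vector
row = matmul([[2, 3, 4, 1]], transpose(T))  # row vector * transposed matrix

print(col)  # [[6], [8], [10], [1]]
print(row)  # [[6, 8, 10, 1]] -- the same point, laid out as a row
```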

When using column vectors, the typical transform order of operations is P * V * W * v, because the column vector must come last to produce the proper result. Remember, matrix multiplication is associative, not commutative, so in order to achieve the appropriate result of a vector transformed by world, transformed into view space, then transformed into homogeneous screen space, we must multiply in that order. Using associativity, this gives us P * (V * (W * v)); working from the inner parentheses outward, the world transformation is applied first, view next, projection last.

If we use row vectors, then the multiplication is as follows: v * W * V * P. Using associativity, we realize it is simply the same order of operations: ((v * W) * V) * P. Or world first, then view, then projection.
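Associativity also means the three matrices can be composed once and the single result reused for every vertex. An illustrative sketch, using made-up 2x2 stand-ins for W, V, and P:

```python
def matmul(A, B):
    """Standard matrix multiply on nested lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# Hypothetical 2x2 stand-ins for world, view, and projection.
W = [[2, 0], [0, 2]]
V = [[1, 1], [0, 1]]
P = [[1, 0], [3, 1]]
v = [[5, 7]]  # 1x2 row vector

step_by_step = matmul(matmul(matmul(v, W), V), P)  # ((v * W) * V) * P
composed     = matmul(v, matmul(matmul(W, V), P))  # v * (W * V * P)

print(step_by_step == composed)  # True: same transform either way
```

In practice this is why engines upload one combined WorldViewProj matrix per object instead of transforming every vertex by three separate matrices.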

Both forms of multiplication are equally valid, and the DX library chooses to use the latter because it matches memory layout patterns, and it allows you to read your transformation order from left to right.

HLSL supports BOTH orders of operations. The "*" operator performs simple element-by-element multiplication; it does not perform matrix multiplication. That is performed using the mul() intrinsic. If you pass a 4-element vector as the first parameter to mul(), it is treated as a row vector, so you must supply matrices that have been multiplied in the proper order and stored in the proper row/column format. This is the default behavior when passing in matrices from the DX libraries using the DX effect parameters. If you supply the 4-element vector as the second parameter to mul(), it is treated as a column vector, and you must provide properly formed and multiplied matrices for column vectors.
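A sketch of that dispatch rule, with pure-Python helpers standing in for HLSL's mul() and an illustrative matrix: passing the vector first (row vector) against M gives the same point as passing the vector second (column vector) against transpose(M), which is exactly why transposing the matrix on the CPU side made the original poster's swapped argument order work.

```python
def matmul(A, B):
    """Standard matrix multiply on nested lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def transpose(M):
    """Swap rows and columns."""
    return [list(col) for col in zip(*M)]

# Row-vector-style translation matrix: translation in the bottom row.
M = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [4, 5, 6, 1]]

# mul(position, M) with a row vector...
as_row = matmul([[2, 3, 4, 1]], M)[0]
# ...matches mul(transpose(M), position) with a column vector.
as_col = [r[0] for r in matmul(transpose(M), [[2], [3], [4], [1]])]

print(as_row)  # [6, 8, 10, 1]
print(as_col)  # [6, 8, 10, 1]
```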

I hope some of this made sense, otherwise it should have given you enough keywords to google a better explanation.


Is your WVP-Matrix column-major? When you are using D3DX-functions you have to transpose your matrices before sending them to shaders.


Unfortunately, this answer doesn't tell the whole story, and by itself is not correct. While it is true that this will fix his particular issue, it will only cloud the issue should he use a different library for math, a different library for loading shader parameters, or change any number of other factors. It is best to understand why the matrix operations produce the results that they do, and how to preserve this understanding across the application -> shader program border.
