Why do we use matrices, and only multiplication as the operator, in transforms?

Started by
13 comments, last by Steven Ford 5 years, 7 months ago

As I search, it seems all computer transforms, even movement, are done by matrix multiplication, even though movement could be done by a simple add or subtract. Is there any reason for it?


A matrix can "compress" information such as translation, scale, rotation, and projection into a single object. In effect, a matrix holds the information that modifies a 3D object along with its basis axes. It is just a math tool, like sine or cosine when you calculate angles in triangles.


Are you asking why we only use matrix multiplication (and why not something else), or why we can use matrix multiplication at all? If the first question, it's because it's consistent: you can write any linear transformation as a matrix, and you can concatenate multiple transformations simply by multiplying their matrices. It is convenient. That may also be the answer to the other question.

Transformations in general cannot be done by a simple add or subtract. That works only for translation; for rotation you need sines and cosines.
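To make the concatenation point concrete, here is a minimal sketch in plain C++ (no particular math library; the names Mat4, makeRotationZ, makeTranslation, and multiply are just illustrative). A rotation needs sin/cos inside the matrix and a translation alone could be an addition, but once both are 4x4 homogeneous matrices, "rotate then translate" collapses into one matrix product that can be applied everywhere:

    #include <cmath>

    struct Mat4 { float m[4][4]; };

    // Rotation about the Z axis: the sines/cosines live inside the matrix.
    Mat4 makeRotationZ(float angle) {
        Mat4 r = {{{ std::cos(angle), -std::sin(angle), 0, 0 },
                   { std::sin(angle),  std::cos(angle), 0, 0 },
                   { 0, 0, 1, 0 },
                   { 0, 0, 0, 1 }}};
        return r;
    }

    // Translation by (x, y, z), stored in the last column (column-vector convention).
    Mat4 makeTranslation(float x, float y, float z) {
        Mat4 t = {{{ 1, 0, 0, x },
                   { 0, 1, 0, y },
                   { 0, 0, 1, z },
                   { 0, 0, 0, 1 }}};
        return t;
    }

    // Standard matrix product: the result applies 'b' first, then 'a'.
    Mat4 multiply(const Mat4& a, const Mat4& b) {
        Mat4 out = {};
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                for (int k = 0; k < 4; ++k)
                    out.m[i][j] += a.m[i][k] * b.m[k][j];
        return out;
    }

    // One matrix now represents "rotate 90 degrees, then move 5 units along X".
    Mat4 transform = multiply(makeTranslation(5, 0, 0), makeRotationZ(3.14159265f / 2));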

Also, it should be mentioned (in addition to what others said before me) that although it may look like a waste of computing power to do 4x4-matrix-by-4-component-vector multiplications, it's actually much better than transforming every vertex "manually" by doing the sin/cos calculations. Don't forget that when you need to transform (rotate, translate, scale) a 3D object, you actually need to apply the same transformation to ALL of its vertices. So you create one transformation matrix (I'm talking only about the world matrix here) and then you just use it for every vertex. You make the expensive sin and cos calculations only a few times during matrix creation, and then you do just thousands of vector-matrix multiplications, which are only multiplications and additions (much faster operations, and GPUs are optimized for this stuff).
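As a rough illustration of that amortization (plain C++, illustrative names, not tied to any engine): the trigonometry is paid once when the world matrix is built, and each of the thousands of vertices then costs only multiplies and adds:

    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Mat4 { float m[4][4]; };

    // Build a world matrix (rotation about Y, then translation).
    // The expensive sin/cos happen here, exactly once.
    Mat4 makeWorld(float angle, Vec3 t) {
        float c = std::cos(angle), s = std::sin(angle);
        return {{{  c, 0, s, t.x },
                 {  0, 1, 0, t.y },
                 { -s, 0, c, t.z },
                 {  0, 0, 0, 1   }}};
    }

    // Transforming one vertex is just multiplications and additions (w assumed to be 1).
    Vec3 transformPoint(const Mat4& m, Vec3 p) {
        return { m.m[0][0]*p.x + m.m[0][1]*p.y + m.m[0][2]*p.z + m.m[0][3],
                 m.m[1][0]*p.x + m.m[1][1]*p.y + m.m[1][2]*p.z + m.m[1][3],
                 m.m[2][0]*p.x + m.m[2][1]*p.y + m.m[2][2]*p.z + m.m[2][3] };
    }

    void transformMesh(std::vector<Vec3>& vertices, float angle, Vec3 translation) {
        Mat4 world = makeWorld(angle, translation);   // trig cost paid once
        for (Vec3& v : vertices)                      // thousands of cheap mul/add operations
            v = transformPoint(world, v);
    }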

First of all, the transformation can be represented easily with matrix operations.

Second, we all need some kind of transformation data structure anyway.

So, think of it as just an object that holds transformation information that you can easily manipulate. It's the same to the user (given that we have the transformation operation functions available), and it's easier for the implementor (it's just matrix operations). Then... why not?
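A hypothetical wrapper along those lines (the type and function names are made up for illustration, not from any real library): the user gets convenient chained operations, while internally every call is just another matrix multiplication folded into the stored 4x4:

    #include <cmath>

    struct Transform {
        float m[4][4] = { {1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1} };  // starts as identity

        // Fold another transform into the stored matrix (this = this * other).
        Transform& concat(const Transform& o) {
            float r[4][4] = {};
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    for (int k = 0; k < 4; ++k)
                        r[i][j] += m[i][k] * o.m[k][j];
            for (int i = 0; i < 4; ++i)
                for (int j = 0; j < 4; ++j)
                    m[i][j] = r[i][j];
            return *this;
        }

        Transform& translate(float x, float y, float z) {
            Transform t;
            t.m[0][3] = x; t.m[1][3] = y; t.m[2][3] = z;
            return concat(t);
        }
        Transform& rotateZ(float a) {
            float c = std::cos(a), s = std::sin(a);
            Transform r;
            r.m[0][0] = c; r.m[0][1] = -s;
            r.m[1][0] = s; r.m[1][1] = c;
            return concat(r);
        }
        Transform& scale(float s) {
            Transform k;
            k.m[0][0] = k.m[1][1] = k.m[2][2] = s;
            return concat(k);
        }
    };

    // Usage:  Transform t;  t.translate(0, 1, 0).rotateZ(1.57f).scale(2.0f);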

http://9tawan.net/en/

17 hours ago, moeen k said:

is there any reason for it?

The only reason for it: in the general case the source matrix can contain projection data (any of the first three elements of the last column is nonzero) and global scaling (w != 1). In that case, simply adding the translation vector to the 4th row gives incorrect results. So to implement translation by adding the translation vector to the 4th row, you have to guarantee that the source matrix contains no projection data. To handle global scaling without projection, you would also have to multiply the translation vector by the source matrix's w component before the addition.
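A small sketch of that point (plain C++, row-vector convention with the translation in the 4th row, matching the description above; the function names are made up): adding the translation vector to the 4th row only matches the real matrix product when the last column is (0, 0, 0, 1):

    struct Mat4 { float m[4][4]; };

    // The "cheap" version: only correct when m[0][3] == m[1][3] == m[2][3] == 0
    // (no projection data) and m[3][3] == 1 (no global scale in w).
    void translateByAddition(Mat4& M, float tx, float ty, float tz) {
        M.m[3][0] += tx;
        M.m[3][1] += ty;
        M.m[3][2] += tz;
    }

    // What the full product M * T actually does: every row is affected by its
    // own last-column element, and the 4th row picks up w * t rather than t.
    void translateByMultiplication(Mat4& M, float tx, float ty, float tz) {
        for (int i = 0; i < 4; ++i) {
            float w = M.m[i][3];   // projection / w data in this row
            M.m[i][0] += w * tx;
            M.m[i][1] += w * ty;
            M.m[i][2] += w * tz;
        }
    }

When the last column is (0, 0, 0, 1) the two functions give identical results; otherwise only the multiplication is correct.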

#define if(a) if((a) && rand()%100)

I think it's basically because it makes it easy to chain a bunch of operations, and the end result is a single matrix you can use to transform everything. That being said, I ended up implementing some flags in my matrix library that allow some cheats which take advantage of the fact that I know a matrix is in a certain state. I'm not really sure if it's worth it these days, but it's old code and I still use it because it works and I've been too busy to analyze and refactor it.

There are two important optimization opportunities:

  • Consolidating any number of transforms into a single matrix, saving computational effort compared to cheaper but repeated calculations. Anything can be transformed with a matrix-vector multiplication.
  • Hardware accelerated matrix-matrix and matrix-vector multiplication is worth implementing. The special case transforms in your library are highly likely to cost more than the general case on the GPU.

Omae Wa Mou Shindeiru

25 minutes ago, LorenzoGatti said:

The special case transforms in your library are highly likely to cost more than the general case on the GPU.

My experience is the exact opposite: using built-in matrix types results in slower execution and higher register usage. All current GPUs are scalar; there is no matrix or vector acceleration.

For me it works better to do the matrix math myself using dot products, doing only the necessary operations of course. Also, quaternions are mostly faster for rotations even if they do more ALU work. Quat + vector for translation is faster than a matrix4, etc... worth trying out. Don't trust stuff just because it's built into the shading language.
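For reference, the quaternion-plus-translation transform mentioned above can look roughly like this (written here as plain C++ for readability, though in practice it would sit in a shader; it uses the standard identity v' = v + 2 * cross(q.xyz, cross(q.xyz, v) + q.w * v) for a unit quaternion):

    struct Vec3 { float x, y, z; };
    struct Quat { float x, y, z, w; };   // assumed to be normalized

    Vec3 cross(Vec3 a, Vec3 b) {
        return { a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
    }
    Vec3 add(Vec3 a, Vec3 b)   { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
    Vec3 mul(Vec3 a, float s)  { return { a.x * s, a.y * s, a.z * s }; }

    // Rotate by a unit quaternion, then translate: the "quat + vector" transform.
    Vec3 transform(Quat q, Vec3 translation, Vec3 v) {
        Vec3 qv      = { q.x, q.y, q.z };
        Vec3 t       = add(cross(qv, v), mul(v, q.w));
        Vec3 rotated = add(v, mul(cross(qv, t), 2.0f));
        return add(rotated, translation);
    }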

Further to all of the very valid points above, even if you had a use case which could be expressed in a different way, by always using matrices for these operations you have a single code path, and hence your code base will be simpler to maintain.

The fact that you can then combine any number of operations together to form a single matrix and then apply that matrix to many thousands of objects is the icing on the cake!

This topic is closed to new replies.
