Using transform matrices vs directly changing position

Started by
3 comments, last by JohnBolton 17 years, 11 months ago
Is it ideal to work with matrices for everything, or would it be better to actually change the position of objects in the world? It seems to me that having to apply a matrix every time you change something, or needing to know the world position of something, would cause unnecessary overhead.
The really big "no-no" against manually changing the positions is that you incur resource locking/modification overhead. An efficient Direct3D program will not create or modify any resources unless it absolutely has to...

Also, it's not clear from your post what context you're talking about. The GPU will only do one matrix multiplication per vertex (depending on your shader/setup) - it always does this and it's constant time - so having or not having a world matrix is largely irrelevant [smile]

If you're doing CPU-based transforms (why?) then it might well be a different story.
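
To make that concrete, here's a minimal sketch of the difference, assuming a Direct3D 9 fixed-function style setup (the device, mesh and vertex names here are placeholders, not anything from your code):

#include <d3dx9.h>

// Per object, per frame: build the world matrix on the CPU and hand it to the
// pipeline. The vertex buffer itself is never locked or rewritten.
void DrawObject(IDirect3DDevice9* device, ID3DXMesh* mesh, const D3DXVECTOR3& pos)
{
    D3DXMATRIX world;
    D3DXMatrixTranslation(&world, pos.x, pos.y, pos.z);

    device->SetTransform(D3DTS_WORLD, &world);   // cheap: one small state change
    mesh->DrawSubset(0);                         // the GPU transforms every vertex
}

// The alternative - moving the object by rewriting its vertices - means locking
// the buffer, touching every vertex on the CPU and re-uploading it:
//
//   MyVertex* v;                                // hypothetical vertex struct
//   vb->Lock(0, 0, (void**)&v, 0);              // synchronizes with the GPU
//   for (DWORD i = 0; i < vertexCount; ++i)
//       v[i].pos += delta;                      // n vector additions per move
//   vb->Unlock();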

hth
Jack

Jack Hoxley [ Forum FAQ | Revised FAQ | MVP Profile | Developer Journal ]

What I had in mind was just basic movement:

vPos += (1, 1, 1)

vs

MatrixTransform(world, (1, 1, 1))

But you'd have to apply a separate transformation for every object in the world. It seems to me the first way is 3n operations and the second is 16n. Then if you wanted to know the actual position for collision detection or something, you'd have to extract it out of the matrix.
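
Spelled out a bit more concretely, this is what I'm picturing (D3DX-style types just for illustration):

#include <d3dx9.h>

void Example()
{
    // Option 1: store the position directly and just add to it.
    D3DXVECTOR3 pos(0.0f, 0.0f, 0.0f);
    pos += D3DXVECTOR3(1.0f, 1.0f, 1.0f);           // 3 additions per object

    // Option 2: keep only a world matrix and concatenate a translation into it.
    D3DXMATRIX world;
    D3DXMatrixIdentity(&world);

    D3DXMATRIX step;
    D3DXMatrixTranslation(&step, 1.0f, 1.0f, 1.0f);
    D3DXMatrixMultiply(&world, &world, &step);      // a full 4x4 concatenation per object

    // And with option 2, getting the position back out for collision tests means
    // reading it from the translation row of the (D3D-style, row-vector) matrix:
    D3DXVECTOR3 extracted(world._41, world._42, world._43);
}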

I'd suggest staying away from touching the vertex positions, at least in the case you are describing.

First, you are forgetting that by adding a vector to every position of a mesh of 10,000 vertices you'd be doing around 30,000 float additions, which can't be very efficient compared to one matrix concatenation.

Second, you'd need to upload the modified data to the GPU.

Third, you'll need to do at least one matrix-vector multiplication per vertex in the shader (or fixed-function pipeline) anyway to get the clip-space coordinates, since your data isn't stored in clip space.
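
To illustrate that third point, this is roughly the per-vertex math the pipeline performs no matter what, written out on the CPU with D3DX helpers purely as an illustration:

#include <d3dx9.h>

// The transform every vertex goes through anyway to reach clip space.
D3DXVECTOR4 ToClipSpace(const D3DXVECTOR3& objectSpacePos,
                        const D3DXMATRIX& world,
                        const D3DXMATRIX& view,
                        const D3DXMATRIX& proj)
{
    // One concatenation per object (not per vertex)...
    D3DXMATRIX worldViewProj = world * view * proj;

    // ...and one matrix-vector multiply per vertex. Pre-translating the vertex
    // data on the CPU doesn't remove this multiply, it only removes the world
    // part of the concatenation above.
    D3DXVECTOR4 clipPos;
    D3DXVec3Transform(&clipPos, &objectSpacePos, &worldViewProj);
    return clipPos;
}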

---

About collision detection: usually you'll represent your collision geometry with simple triangle meshes or basic primitives such as boxes, cylinders or spheres. For ray-triangle tests, for example, you can transform the ray from world space to object space and do the ray-triangle test in that space, instead of keeping a world-space version of the positions.
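
A rough sketch of that idea, assuming the D3DX9 helpers (the triangle and matrix here are placeholders, and error handling is minimal):

#include <d3dx9.h>

// Test a world-space ray against one object-space triangle by moving the ray
// into object space instead of moving the geometry into world space.
bool RayHitsTriangle(const D3DXVECTOR3& rayPosWorld,
                     const D3DXVECTOR3& rayDirWorld,
                     const D3DXMATRIX&  world,   // the object's world matrix
                     const D3DXVECTOR3& v0,      // triangle in object space
                     const D3DXVECTOR3& v1,
                     const D3DXVECTOR3& v2,
                     float*             outDist)
{
    // Invert the world matrix once per object...
    D3DXMATRIX invWorld;
    if (D3DXMatrixInverse(&invWorld, NULL, &world) == NULL)
        return false;                            // matrix was not invertible

    // ...and bring the ray into object space: the origin as a point, the
    // direction as a direction (no translation applied to it).
    D3DXVECTOR3 rayPosObj, rayDirObj;
    D3DXVec3TransformCoord(&rayPosObj, &rayPosWorld, &invWorld);
    D3DXVec3TransformNormal(&rayDirObj, &rayDirWorld, &invWorld);

    // The test then runs against the original vertex data - no world-space copy
    // of the positions is ever built. Note that if 'world' contains scaling,
    // the returned distance is in object-space units.
    float u, v;
    return D3DXIntersectTri(&v0, &v1, &v2, &rayPosObj, &rayDirObj,
                            &u, &v, outDist) != FALSE;
}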

I'd suggest using matrices.
It depends on the situation. If an object has no orientation and only needs a vector for its position, there isn't much benefit to using a matrix for its position or using a matrix to move it.

However, sometimes you might be willing to sacrifice a little bit of performance for better organization/design or even for convenience. Sometimes, it might even be faster to use a matrix if the code or data is set up for matrices and you have to move data around or take extra steps to get the position.
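
For what it's worth, one possible sketch of the position-only case (a made-up struct, D3DX types used just for illustration): keep the vector as the authoritative state and only build a matrix when the renderer needs one.

#include <d3dx9.h>

// An object with no orientation: the position vector is the real state and a
// matrix is only produced on demand for rendering.
struct PositionOnlyObject
{
    D3DXVECTOR3 pos;

    void Move(const D3DXVECTOR3& delta)
    {
        pos += delta;                      // 3 additions; nothing else is touched
    }

    D3DXMATRIX WorldMatrix() const         // built at draw time
    {
        D3DXMATRIX world;
        D3DXMatrixTranslation(&world, pos.x, pos.y, pos.z);
        return world;
    }
};

Collision code reads pos directly; the renderer calls WorldMatrix() and passes the result to SetTransform (or a shader constant), so neither side pays for a representation it doesn't need.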
John Bolton
Locomotive Games (THQ)
Current Project: Destroy All Humans (Wii). IN STORES NOW!

This topic is closed to new replies.
