Using transform matrices vs directly changing position
Is it ideal to work with matrices for everything, or would it be better to change the position of objects in the world directly? It seems to me that having to apply a matrix every time you change something, or whenever you need to know the world position of something, would cause unnecessary overhead.
The really big "no no" against manually changing the position is that you incur resource locking/modification overheads. An efficient Direct3D program will not create or modify any resources unless it absolutely has to...
Also, it's not clear from your post what context you're talking about. The GPU will only (depending on your shader/setup) do one matrix multiplication - it always does this and it should be constant time - so having or not having a world matrix is largely irrelevant [smile]
If you're doing CPU-based transforms (why?) then it might well be a different story.
hth
Jack
What I had in mind was just basic movement:
vPos += (1, 1, 1)
vs
MatrixTransform(world, (1, 1, 1))
But you'd have to apply a separate transformation for every object in the world. It seems to me the first way is 3n operations, and the second is 16n. Then if you wanted to know the actual position for collision detection or something, you'd have to extract it out of the matrix.
I'd suggest staying away from touching the vertex positions, at least in the case you are describing.
First, you are forgetting that by adding a vector to every position of a mesh of 10000 vertices you'll do some 30000 float additions, which can't be very efficient compared to a single matrix concatenation.
Second, you'll need to upload the modified vertex data to the GPU.
Third, you'll need to do at least one matrix-vector multiplication in the shader (or fixed function pipeline) anyway to get the clip space coordinates (since you don't keep your data in clip space).
---
About collision detection: usually you'll represent your collision geometry with simple triangle meshes or basic primitives such as boxes, cylinders or spheres. For ray-triangle tests, for example, you can transform the ray from world space to object space and do the ray-triangle test in that space instead of keeping a world-space version of the positions.
I'd suggest using matrices
It depends on the situation. If an object has no orientation and only needs a vector for its position, there isn't much benefit to using a matrix for its position or using a matrix to move it.
However, sometimes you might be willing to sacrifice a little bit of performance for better organization/design or even for convenience. Sometimes, it might even be faster to use a matrix if the code or data is set up for matrices and you have to move data around or take extra steps to get the position.