Quote: Original post by juxie
Quote: Original post by jyk
I've heard people say that, but I'm not exactly sure what they mean by it, and in any case I would argue that it's not really accurate (OpenGL and DirectX deal with transforms in basically the same way).
I think what people are referring to here is the difference in notational convention between the two APIs (row-vector vs. column-vector notation) and the implications for multiplication order.
When using the DirectX math library this is directly evident, but in OpenGL everything happens 'under the hood'. It could probably be argued that OpenGL itself doesn't really assume a notational convention; rather, it is simply the case that transforms are applied in the opposite of the order in which the corresponding function calls appear in the code. (Most OpenGL references use column-vector notation, however, so this is how people tend to think of things when working with OpenGL transform functions.)
It's all a bit confusing, but I think the first thing you need to understand is that OpenGL and D3D/DirectX are fundamentally the same in terms of how they deal with transforms. I say 'fundamentally' because there are a number of superficial differences - for example, D3D maintains separate world and view matrices, while OpenGL combines them into a single modelview matrix - but the concepts are essentially the same.
I feel bad that I am still pretty much confused.
I have tried to read up on quite a number of articles, but they only seem to touch on the simpler transformations.
I am not sure when I have done a transformation correctly and when I have done it incorrectly.
Is there anywhere I can read up on this?
Thanks.
Do you have a math program like Octave installed?
I would suggest doing some transformations by hand and examining the results, to get an intuition for how they behave.
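For example, here is a rough sketch of the kind of experiment I mean, written in Python with numpy rather than Octave (the rotation/translation helpers are just made up for this example). It builds a couple of 2D transforms in homogeneous coordinates and shows how the column-vector convention (most OpenGL references) and the row-vector convention (the D3DX helpers) give the same result with the multiplication order reversed, and how swapping the order changes the result.

import numpy as np

# Simple 2D transforms in homogeneous coordinates (illustrative helpers only).
def rotation(deg):
    t = np.radians(deg)
    return np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

v = np.array([1.0, 0.0, 1.0])   # the point (1, 0) in homogeneous form

# Column-vector convention: v' = M * v, so (T * R) * v rotates first, then translates.
print(translation(2.0, 0.0) @ rotation(90.0) @ v)      # roughly [2, 1, 1]

# Row-vector convention: the matrices are transposed, the vector goes on the
# left, and the order flips: v' = v * R^T * T^T still rotates first, then translates.
print(v @ rotation(90.0).T @ translation(2.0, 0.0).T)  # same point, roughly [2, 1, 1]

# Swapping the order gives a different result (translate first, then rotate).
print(rotation(90.0) @ translation(2.0, 0.0) @ v)      # roughly [0, 3, 1]

Playing with small examples like this makes it much easier to see why the order of your glTranslate/glRotate calls (or your D3DX matrix multiplies) matters, and which convention you are actually working in.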