Thinking in OpenGL

Hi,

I would recommend using this technique sparingly. In OpenGL (and, as you know if you've made games, in any other graphics API), it's best to keep track of an object's position and orientation yourself.

So, while you could define an object's movement as a series of translations and rotations, it's much easier to translate to the location, and then rotate to the proper orientation.

I have, however, used this technique for simple things (e.g., causing an object to "orbit" a point with a rotation*translation*rotation, instead of a translation*rotation).
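
For example (just a sketch using the legacy fixed-function calls; drawObject, pos, and the angle/radius variables are placeholders, not code from this thread):

// Usual case: translate to the location, then rotate to the orientation.
glPushMatrix();
glTranslatef(pos.x, pos.y, pos.z);        // place the object in the world
glRotatef(heading, 0.0f, 1.0f, 0.0f);     // then orient it about its own up axis
drawObject();
glPopMatrix();

// "Orbit" trick: rotation*translation*rotation.
glPushMatrix();
glRotatef(orbitAngle, 0.0f, 1.0f, 0.0f);  // revolve around the centre point
glTranslatef(orbitRadius, 0.0f, 0.0f);    // move out to the orbit radius
glRotatef(spinAngle, 0.0f, 1.0f, 0.0f);   // spin the object itself
drawObject();
glPopMatrix();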

Evidently, this new OpenGL 3 thing doesn't do hardware matrices, so for learning OpenGL, I'd recommend just getting used to translating and then rotating.

Cheers,
-G

[size="1"]And a Unix user said rm -rf *.* and all was null and void...|There's no place like 127.0.0.1|The Application "Programmer" has unexpectedly quit. An error of type A.M. has occurred.
[size="2"]

I'm back to bend your ears again if I may. My queries really are more math related now though so perhaps a moderator might want to move this to the appropriate forum?

Anyway, I have updated my project to utilize transformation matrices manually in place of glRotate and glTranslate so I'd like to ask if I'm on the right track.

I wrote my own simple vector and matrix classes and I'm pretty sure they are correct. I know that's a bit of a leap of faith but I understand the way matrices work and that opengl matrices are column major etc... To calculate the transform matrix for my model I do this:

Matrix4 xRot = MathUtil::rotateX(rotation.x*PIdiv180);
Matrix4 yRot = MathUtil::rotateY(rotation.y*PIdiv180);
Matrix4 zRot = MathUtil::rotateZ(rotation.z*PIdiv180);
Matrix4 trans = MathUtil::translate(position.x, position.y, position.z);
transform = (yRot * zRot * xRot) * trans;


The MathUtil functions return the appropriate rotation and translation matrices.

My render function now looks like this:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glPushMatrix();
glLoadIdentity();
// move "camera" back to view the model
glTranslatef(0,0,-600);
// this now updates the model position and calculates its transform matrix
craft->Update();
// apply the model transform matrix
glMultMatrixf(craft->transform.elements);
// draw the vertices
craft->Draw();
glPopMatrix();
glutSwapBuffers();


This results in the rotation working correctly as long as the model has not been translated; once it has been, the vertices are just warped all over the screen. I'm kind of confused as to how the translation should work.

If forward movement is applied to the model, I increment its position.y (since it faces along its own y-axis). Then this position vector is fed into the MathUtil to obtain the translation matrix as above.

Here is how I generate the translation matrix:
Matrix4 MathUtil::translate(const float x, const float y, const float z)
{
  GLfloat res[] = {
    1,0,0,x, // actually *column* 1 because opengl is column major
    0,1,0,y,
    0,0,1,z,
    0,0,0,1
  };
  return Matrix4(res);
}


I know it's not the most elegant way, but for now my matrix class simply encapsulates an array of GLfloats and overloads the operators.

Can you see anything drastically wrong with my approach?

Cheers!
Quote:Can you see anything drastically wrong with my approach?
I see a few potential problems:
Quote:To calculate the transform matrix for my model I do this:

*** Source Snippet Removed ***
This:
transform = (yRot * zRot * xRot) * trans;
Would be correct for row vectors. If you're using column vectors, the order will need to be reversed.
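
For example (using the same variable names, and assuming everything else stays as it is), the column-vector version of that line would be:

// Column vectors: the rightmost matrix is applied first, so rotate first, then translate.
transform = trans * (xRot * zRot * yRot);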
Quote:This results in the rotation working correctly as long as the model has not been translated; once it has been, the vertices are just warped all over the screen. I'm kind of confused as to how the translation should work.
I think this is probably because your matrices are set up incorrectly (see below).
Quote:If forward movement is applied to the model, I increment its position.y (since it faces along its own y-axis). Then this position vector is fed into the MathUtil to obtain the translation matrix as above.
Moving 'forward' by incrementing the y value is incorrect (what I wrote about this earlier still applies - read my earlier posts for more info).
Quote:Here is how I generate the translation matrix:
*** Source Snippet Removed ***
Your matrices appear to be set up incorrectly. What you've shown above would work for either a row-vector/column-major setup or a column-vector/row-major setup, but OpenGL expects either a row-vector/row-major setup or a column-vector/column-major setup (in other words, OpenGL expects the elements of a given basis vector of the matrix to be contiguous in memory).
Ok I'm REALLY confused about the "forward" movement issue then. I thought that in model space I am always moving along the Y axis so this is the only component of the position vector that I change. But this is then converted into world coordinates via the transformation matrix - which is a translation matrix constructed based on the local position, then multiplied by the rotation matrices to produce an absolute transform matrix to be fed to opengl.

Considering your post, I know my above reasoning is wrong and I can't quite get my head around it even re-reading your previous advice. I'm against spoon feeding as much as the next person, but maybe in this case you could give me an example to help me understand the concept - if I want to move the model in the direction it is facing, how should I alter the member position vector?

My matrices are set up such that the first column is:
element[0], element[1], element[2], element[3]

and the first row is:
element[0], element[4], element[8], element[12]
and so on...

Therefore in a translation matrix, the xyz magnitude components would take the positions [3], [7], [11] respectively. Is this what you refer to when you say opengl expects column major matrices?

Your comment regarding vectors/matrices has thrown me off a little, as I don't carry out any manual vector manipulation. I simply feed the vector components into my matrix generation functions as needed, multiply the matrices to get my compound transform, then apply this via glMultMatrixf before drawing the vertices.

As I type this, I'm getting the idea that every time I update the position of the model, I don't merely increment the y component, but I must multiply the position vector by some other velocity vector... but how exactly do I construct that "other" vector... :)

Thanks a lot for your time and patience. I hope I'm not asking any completely stupid questions here! The more I ask, the more I'm feeling I'd be better moved to the beginners section!
Quote:My matrices are set up such that the first column is:
element[0], element[1], element[2], element[3]

and the first row is:
element[0], element[4], element[8], element[12]
and so on...

Therefore in a translation matrix, the xyz magnitude components would take the positions [3], [7], [11] respectively. Is this what you refer to when you say opengl expects column major matrices?
This setup is wrong (for the OpenGL fixed-function pipeline, at least). The translation should reside in elements 12, 13, and 14, not 3, 7, and 11.

Basically, your matrices should be the transpose of what they are currently.
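
For what it's worth, here's a minimal corrected sketch of the earlier MathUtil::translate (assuming your Matrix4 constructor simply copies the sixteen floats in order):

Matrix4 MathUtil::translate(const float x, const float y, const float z)
{
  // Column-major, as glMultMatrixf expects: the translation lives in elements 12, 13, 14.
  GLfloat res[] = {
    1, 0, 0, 0,   // column 0: x basis
    0, 1, 0, 0,   // column 1: y basis
    0, 0, 1, 0,   // column 2: z basis
    x, y, z, 1    // column 3: translation
  };
  return Matrix4(res);
}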
Quote:Your comment regarding vectors/matrices has thrown me off a little, as I don't carry out any manual vector manipulation. I simply feed the vector components into my matrix generation functions as needed, multiply the matrices to get my compound transform, then apply this via glMultMatrixf before drawing the vertices.
Whether or not you're performing any matrix-vector multiplications explicitly in your code, you still need to get your conventions set up in a way that makes sense.

Firstly, it doesn't matter if you use row vectors or column vectors - you can use either. The only requirement is that the 'majorness' of your matrices match the vector notation convention; that is, if you're going to use row vectors your matrices need to be row major, and if you're going to use column vectors your matrices need to be column major. (Currently your matrices are swapped around - they're either row-basis matrices with column-major ordering or vice versa, depending on how you look at it.)

Once you've settled on a vector notation convention (row or column vectors), you need to make sure that your transform matrices and matrix multiplication order reflect this convention correctly.

With row vectors, transforms should be built with basis vectors in the rows of the matrix; with column vectors, the basis vectors should be in the columns of the matrix.

Furthermore, with row vectors, the matrix product A*B applies the associated transforms in the order A->B, while with column vectors, the transforms are applied in the order B->A. Any expressions involving matrix multiplication will need to be ordered accordingly.
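
As a concrete illustration of the ordering rule (A and B here are just arbitrary transforms, not anything from your code):

Row vectors:      v' = v * A * B   // A is applied first, then B
Column vectors:   v' = B * A * v   // A is still applied first, then B

So "rotate, then translate" reads R * T with row vectors, but T * R with column vectors.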

These topics are a frequent source of confusion for many, so don't hesitate to ask for clarification if you need it. You might also try searching the forum archives and/or internet as a whole for, say, 'row column major vector', and see what you find. Unfortunately though, confusion regarding these topics is so widespread that a good deal of what you find will be wrong anyway, so it's probably best to ask for any needed clarification here as well :)
Quote:Ok I'm REALLY confused about the "forward" movement issue then. I thought that in model space I am always moving along the Y axis so this is the only component of the position vector that I change. But this is then converted into world coordinates via the transformation matrix - which is a translation matrix constructed based on the local position, then multiplied by the rotation matrices to produce an absolute transform matrix to be fed to opengl.
I think I understand the source of your confusion. When you build the transform for your object, the rotation and translation transforms are combined in the order rotation->translation. However, the translation transform is not modified when these two transforms are combined; in other words, it's not 'adjusted' to 'match' the rotation in any way. If the input translation is (1,2,3), then after the combined transform is built, the translation is still (1,2,3). So, the translation has to be correct and in world/parent space from the outset. Whatever you set the translation to, that is where the object will be in world space. (Sorry if I'm being a little redundant, but I'm just trying to provide a few different ways of looking at it :)
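
To illustrate with the column-vector/column-major convention (just a worked example, written out by hand):

[ 1 0 0 tx ]   [ r00 r01 r02 0 ]   [ r00 r01 r02 tx ]
[ 0 1 0 ty ] * [ r10 r11 r12 0 ] = [ r10 r11 r12 ty ]
[ 0 0 1 tz ]   [ r20 r21 r22 0 ]   [ r20 r21 r22 tz ]
[ 0 0 0  1 ]   [   0   0   0 1 ]   [   0   0   0  1 ]

The translation column (tx, ty, tz) comes through the multiplication unchanged; combining it with the rotation never 'rotates' it.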
Quote:if I want to move the model in the direction it is facing, how should I alter the member position vector?
As follows:

1. Build the rotation transform matrix for the object.

2. The rows or columns (depending on whether you're using row or column vectors) of this matrix are the direction vectors for the object. Extract the forward direction vector (which sounds like it's the y axis in your case), and store it in vector form.

3. Add this vector (most likely scaled by speed and time step) to your position vector.
Quote:As I type this, I'm getting the idea that every time I update the position of the model, I don't merely increment the y component, but I must multiply the position vector by some other velocity vector... but how exactly do I construct that "other" vector... :)
You don't multiply the vectors, but rather add them (as described above). Here's some example pseudocode:
matrix33 m = get_rotation_matrix();
vector3 forward(m(0,1), m(1,1), m(2,1));
position += forward * speed * time_step;


Once again thanks for a very well explained answer. I have managed to get it working (I admit through 80% understanding of what's going on and 20% trial and error) and for what it's worth it turns out my forward vector (y) ends up in matrix elements [4],[5] and [6]. As far as I can gather, this is what you would expect from a column major rotation matrix?

My next obstacle is to work out why my "pitch" is still relative to the world: if the craft is facing at right angles to the world "up", then the pitch control becomes roll, and when facing -z pitch-up raises the nose, whilst when facing +z it lowers the nose. Yaw and forward movement are working correctly, though.

Interesting stuff. And thanks again, your continued help is much appreciated!
Quote:I have managed to get it working (I admit through 80% understanding of what's going on and 20% trial and error) and for what it's worth it turns out my forward vector (y) ends up in matrix elements [4],[5] and [6]. As far as I can gather, this is what you would expect from a column major rotation matrix?
That is what you would expect for either a column-major matrix intended for use with column vectors, or a row-major matrix intended for use with row vectors. For the other two configurations (row major/column vector and column major/row vector), the y axis would be in elements [1], [5], and [9].

But yes, [4], [5], and [6] is what you want for OpenGL.
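
For reference, a minimal sketch of that extraction with a flat sixteen-element column-major array (the 'elements' member is from your earlier posts; Vector3 stands in for whatever your vector class is called, and the speed/timeStep names and operator overloads are assumptions):

// Column-major 4x4: the y (forward) basis vector occupies elements 4, 5, 6.
Vector3 forward(craft->transform.elements[4],
                craft->transform.elements[5],
                craft->transform.elements[6]);
craft->position += forward * speed * timeStep;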
Quote:My next obstacle is to work out why my "pitch" is still relative to the world: if the craft is facing at right angles to the world "up", then the pitch control becomes roll, and when facing -z pitch-up raises the nose, whilst when facing +z it lowers the nose. Yaw and forward movement are working correctly, though.
Not sure about that one (not without seeing the code at least), but it sounds like either a) you just have your Euler-angle order wrong, or b) you need to ditch Euler angles and store your orientation in matrix form instead (which is what you'll need to do if you're trying to implement full 6DOF motion).
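
A rough sketch of the matrix-based approach, purely for illustration (it reuses the MathUtil names from earlier in the thread, assumes the column-vector convention, and the *Delta variables are hypothetical per-frame inputs in radians):

// Store the orientation as a matrix instead of three Euler angles.
Matrix4 orientation;  // starts out as the identity

// Each frame, apply small rotations about the craft's *local* axes by
// post-multiplying (with column vectors, the rightmost factor acts in local space).
orientation = orientation * MathUtil::rotateX(pitchDelta);
orientation = orientation * MathUtil::rotateZ(rollDelta);
orientation = orientation * MathUtil::rotateY(yawDelta);

// The full model transform is then "rotate, then translate":
transform = MathUtil::translate(position.x, position.y, position.z) * orientation;

(You'd also want to re-orthonormalize the orientation matrix occasionally so floating-point drift doesn't creep in.)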
Quote:And thanks again, your continued help is much appreciated!
No problem :)
