Archived

This topic is now archived and is closed to further replies.

Eber Kain

Getting the current modelview matrix transform

Recommended Posts

Eber Kain    130
You can use the OpenGL function glGetFloatv(GL_MODELVIEW_MATRIX, matrix); to get a copy of the current matrix. Doing so has to read back from video memory, though. The only way I've thought of to do this without touching video memory is to put all the matrix calculation stuff in my program, so that I keep a local copy of the current matrix and just send the finished matrix to OpenGL. That also means I'd have to add my own push-matrix and pop-matrix functions, and there would have to be another matrix stack locally. So which is better: accessing video memory, or declaring variables for a new matrix stack and making an extra function call for every translate and rotate to send the compiled matrix to OpenGL?
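A minimal sketch of the local-copy approach described above, assuming OpenGL's column-major 16-float layout; the names (mat_push, mat_pop, mat_translate, STACK_DEPTH) are made up for illustration, not any real API:

```c
#include <string.h>

#define STACK_DEPTH 32

static float stack[STACK_DEPTH][16];   /* local matrix stack, column-major */
static int   top = 0;                  /* index of the current matrix */

static void mat_identity(float *m)
{
    memset(m, 0, 16 * sizeof(float));
    m[0] = m[5] = m[10] = m[15] = 1.0f;
}

/* Post-multiply the current matrix by a translation, the way glTranslatef
 * does: only the last column (elements 12..15) changes. */
static void mat_translate(float *m, float x, float y, float z)
{
    for (int i = 0; i < 4; ++i)
        m[12 + i] += m[i] * x + m[4 + i] * y + m[8 + i] * z;
}

static void mat_push(void)
{
    memcpy(stack[top + 1], stack[top], sizeof stack[0]);
    ++top;
}

static void mat_pop(void)
{
    --top;
}
```

The finished matrix in stack[top] would then be uploaded once per object with glLoadMatrixf, instead of being read back with glGetFloatv.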

alargeduck    122
Remember that modern video cards do transformations in hardware, which speeds them up considerably over doing them locally. Matrix operations aren't the cheapest around, and sending the matrix to OpenGL isn't the fastest either. I would think that hit is much greater than a read from video memory.

Eber Kain    130
That's what I say, but some people I've talked this over with in the past say that accessing video memory is one of the worst things to do if you want to keep a program fast.

I know I could never write matrix routines as optimized as the ones OpenGL uses.

vincoof    514
That depends on when you need to use this matrix, and how many operations transform it.

For instance:

  /* CASE ONE */
/* Using your own matrices */

Matrix4 my_matrix;  /* your own matrix class */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); my_matrix = identityMatrix();
glTranslatef(-1.0f, 0.0f, 0.0f); my_matrix.translate(-1.0f, 0.0f, 0.0f);
glRotatef(120.0f, 0.0f, 1.0f, 0.0f); my_matrix.rotate(120.0f, 0.0f, 1.0f, 0.0f);
glScalef(1.0f, 2.0f, -1.0f); my_matrix.scale(1.0f, 2.0f, -1.0f);
glTranslatef(0.0f, -1.0f, 5.0f); my_matrix.translate(0.0f, -1.0f, 5.0f);

float point_local[4] = { 1.0f, 2.0f, 1.0f, 1.0f };
float point_world[4];

my_matrix.fromLocalToWorld(point_local, point_world);


  /* CASE ONE */
/* Using OpenGL matrices */


glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-1.0f, 0.0f, 0.0f);
glRotatef(120.0f, 0.0f, 1.0f, 0.0f);
glScalef(1.0f, 2.0f, -1.0f);
glTranslatef(0.0f, -1.0f, 5.0f);

GLfloat opengl_matrix[16];
GLfloat point_local[4] = { 1.0f, 2.0f, 1.0f, 1.0f };
GLfloat point_world[4];

glGetFloatv(GL_MODELVIEW_MATRIX, opengl_matrix);
fromLocalToWorld(point_local, point_world, opengl_matrix);


  /* CASE TWO */
/* Using your own matrices */

Matrix4 my_matrix;  /* your own matrix class */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); my_matrix = identityMatrix();
glTranslatef(-1.0f, 0.0f, 0.0f); my_matrix.translate(-1.0f, 0.0f, 0.0f);

float point_local[4] = { 1.0f, 2.0f, 1.0f, 1.0f };
float point_world[4];

my_matrix.fromLocalToWorld(point_local, point_world);


  /* CASE TWO */
/* Using OpenGL matrices */


glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(-1.0f, 0.0f, 0.0f);

GLfloat opengl_matrix[16];
GLfloat point_local[4] = { 1.0f, 2.0f, 1.0f, 1.0f };
GLfloat point_world[4];

glGetFloatv(GL_MODELVIEW_MATRIX, opengl_matrix);
fromLocalToWorld(point_local, point_world, opengl_matrix);



In the first case, it is slower to use your own matrices, because you perform many software operations and only ever use the final matrix. Since you don't use the intermediate matrices (for instance, the one between the rotate and the scale), you don't need to "know" them.

In the second case, it is faster to use your own matrices, because the operations performed on them are minimal (only a load-identity plus one translation), and I'd guess it would be slower to query the video memory.

So, my conclusion: if you have to transform your matrices a lot, use the OpenGL matrices; and if you're only going to apply a few transformations, use your own matrices.
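The fromLocalToWorld call in the snippets above is left undefined; a possible implementation, assuming the column-major 16-float layout that glGetFloatv(GL_MODELVIEW_MATRIX, ...) returns (the function name itself is just the placeholder used in the snippets):

```c
/* Transform a 4-component point by a column-major 4x4 matrix.
 * Element (row, col) lives at m[col * 4 + row], as in OpenGL. */
void fromLocalToWorld(const float local[4], float world[4], const float m[16])
{
    for (int row = 0; row < 4; ++row)
        world[row] = m[row]      * local[0]
                   + m[row + 4]  * local[1]
                   + m[row + 8]  * local[2]
                   + m[row + 12] * local[3];
}
```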

RipTorn    722
I personally use my own matrix class... one I wrote myself, not based on other matrix implementations...
thus, it offers a lot of advantages over them...

For example, all the matrix types I've looked at, when rotating the matrix, create a rotation matrix and then multiply the original matrix by it, using some 64 multiplications and another 64 additions.
Not very fast.
So I simply worked out which values would change and how...
I ended up using only 12 multiplies and 6 adds.

Also, because I have access to the matrix data itself, I can do other things; for example, I can rotate the matrix relative to its current direction, which is infinitely useful.

I assume it's slower than a GL matrix, but I'd much rather have the useful extras I've mentioned, and have instant access to it for rotating vectors and such.

Are D3D's matrices hardware accelerated, though?
