OpenGL Matrix issues

4 comments, last by haegarr 10 years, 1 month ago

Hello!

I'm writing an engine that uses OpenGL, and I want to implement an FPS-style camera. I'm handling my own matrices, so each object has its own matrix. The matrices are implemented in row-major order using post-multiplication (I know, exactly the inverse of the OpenGL convention, but I need it to be somehow independent of the OpenGL "style").

My render function is like this:


void render() {
    glMatrixMode(GL_MODELVIEW);
    glViewport(0,0,width,height);
    
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glLoadIdentity();

    glColor3f( 1.f, 1.f, 0.f );
    for(unsigned int i = 0; i < scene->primitives.size(); i++)
    {
        Primitive* object = scene->primitives[i];
        
        glPushMatrix();
            vector3f pos = object->globalOrigin;
            glTranslatef(pos.coord[0],pos.coord[1],pos.coord[2]);
            glutSolidSphere(1, 20, 20);
        glPopMatrix();
    }

    Matrix4x4 modelview = scene->camera.getGLTransform();
    
    glMultMatrixf((GLfloat*)modelview.data);

    glFlush();
}

It basically draws a bunch of spheres. The function getGLTransform() just gets the matrix from the camera and transposes it.

From what I understand, for an FPS-style camera, I need to first translate my world space relative to the camera position, and then rotate it. So I must apply the camera transformation last in my code, because OpenGL uses pre-multiplication and the transformations are applied from last to first. But the above code doesn't respond to my camera location and rotations.
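For reference, this is the classic fixed-function version of what I'm trying to reproduce with my own camera matrix (camPitch, camYaw and camPos are just placeholders, not my real member names):

glLoadIdentity();

// Undo the camera orientation first (specified first = applied to the
// vertices last), then undo the camera position.
glRotatef(-camPitch, 1.0f, 0.0f, 0.0f);
glRotatef(-camYaw,   0.0f, 1.0f, 0.0f);
glTranslatef(-camPos[0], -camPos[1], -camPos[2]);

// ... per-object transforms and draw calls go here ...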

If I apply the camera transformations right after glLoadIdentity(), it responds to the camera position and rotation, but it doesn't rotate like an FPS system: it first rotates my world space and then translates it.

I already checked the matrices and they are okay.

I just can't see what I'm doing wrong, please help me.

Thanks in advance.


First of all, you're actually not handling your own matrices in much of this code. You're using glLoadIdentity/glPushMatrix/glPopMatrix/glTranslate - that's not handling your own, that's using the GL matrix stack.

Secondly, transforms specified by a matrix only apply to objects drawn after you specify those transforms. So your glMultMatrix call at the end is actually doing nothing. The correct sequence is to set a matrix, then draw an object. The object is drawn with the matrix you've just set applied. Setting a matrix after you draw an object has no effect on the object just drawn. Objects are drawn based on current state, and future state doesn't affect previously drawn objects.
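In other words, the camera transform has to be applied before the loop that draws the spheres. Something like this would be the right shape (just a sketch based on your own code; it assumes getGLTransform() really returns a view matrix in the layout glMultMatrixf expects):

glLoadIdentity();

// Apply the camera/view transform first, so that everything drawn
// afterwards is affected by it.
Matrix4x4 modelview = scene->camera.getGLTransform();
glMultMatrixf((GLfloat*)modelview.data);

for(unsigned int i = 0; i < scene->primitives.size(); i++)
{
    Primitive* object = scene->primitives[i];

    glPushMatrix();
        vector3f pos = object->globalOrigin;
        glTranslatef(pos.coord[0], pos.coord[1], pos.coord[2]);
        glutSolidSphere(1, 20, 20);
    glPopMatrix();
}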

Thirdly, your mixture of row-major/column-major storage and post/pre multiplication is a recipe for disaster. You'll find things a lot easier to understand and debug in the future (and you'll have cleaner code) if you pick one convention and stick to it. Mixing multiple conventions means that you are going to get things wrong at some point - this isn't "if", it's "when". And when it happens you'll need to disentangle your mess and hope you remember which convention is used (and which is expected) at each part of your code in order to troubleshoot. If that sounds horrible, it's because it is. You say that you "need it somehow independent of the OpenGL style", which indicates to me that you don't fully understand what you're doing here.
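To illustrate where this bites: the classic matrix entry points expect their 16 floats in column-major order, so the translation of a column-vector matrix ends up in elements 12-14. A sketch (tx, ty, tz are just example offsets):

GLfloat tx = 1.0f, ty = 2.0f, tz = 3.0f;   // example offsets

// Column-vector convention (what the fixed-function stack uses):
// p' = M * p, translation in the last column.  In column-major memory
// order the first four floats are the first column, so the translation
// lands at indices 12..14.
GLfloat columnMajor[16] = {
    1,  0,  0,  0,
    0,  1,  0,  0,
    0,  0,  1,  0,
    tx, ty, tz, 1 };
glMultMatrixf(columnMajor);

Note that your row-vector form of the same transform is the transpose of this matrix, and laying that transpose out in row-major memory happens to produce exactly the same 16 floats - which is precisely the kind of coincidence that makes mixed conventions so hard to debug.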

And finally - that glFlush at the end of your render function? Have you got a single-buffered context? If so, get rid of it and create a proper double-buffered one (no, it's not more complex), replacing your glFlush with the appropriate SwapBuffers call (see your API or framework documentation for this).
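With GLUT (which glutSolidSphere suggests you're using) that's roughly:

// At start-up: request a double-buffered RGB context with a depth buffer.
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

// At the end of the display callback: present the back buffer.
glutSwapBuffers();   // instead of glFlush()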


Thanks for your reply, mhagain!

You are right! I'm not handling my own matrices, I just define some. I'm implementing a ray tracing renderer, and OpenGL is used only to visualize my kd-tree.

I chose to make it post-multiplication because, for me, it is more intuitive to have the transformations applied from first to last. So when I'm finished with the OpenGL part it will be easier.

OK, I edited my code to the following:


void render() {
    glMatrixMode(GL_MODELVIEW);
    glViewport(0,0,width,height);
    
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    
    glLoadIdentity();
    Matrix4x4 modelview = scene->camera.getGLTransform();

    glColor3f( 1.f, 1.f, 0.f );
    for(unsigned int i = 0; i < scene->primitives.size(); i++)
    {
        Primitive* object = scene->primitives[i];
        
        glPushMatrix();
            vector3f pos = object->globalOrigin;
            glTranslatef(pos.coord[0],pos.coord[1],pos.coord[2]);
            glMultMatrixf((GLfloat*)modelview.data);
            glutSolidSphere(1, 20, 20);
        glPopMatrix();
    }

}

But it still doesn't give me the FPS-style camera. Is my concept of an FPS camera wrong? I mean, shouldn't I first translate and then rotate? I guess my post-multiplication matrices are interfering, but I don't see how.

I thought that glFlush() only forces the queued commands to be executed on the current buffer; I'm actually using double buffering. The buffer swap is called right after the render function, in my main loop.

This is my first OpenGL project, I still have a lot to learn.


… The function getGLTransform() just gets the matrix from the camera and transposes it.

That would be wrong in general. You need to invert it (the view matrix is the inverse of the camera matrix); transposing it is sufficient if and only if there is no translation of the camera involved. Maybe you use the transposed matrix here with respect to the following point, but that still doesn't free you from computing the inverse.
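For a rigid camera transform (rotation plus translation) the inverse is cheap to compute. A sketch in the row-vector convention you described, assuming a hypothetical row-major m[4][4] member in Matrix4x4 (translation in the last row):

// M = | R  0 |        inverse(M) = | R^T       0 |
//     | t  1 |                     | -t * R^T  1 |
Matrix4x4 rigidInverse(const Matrix4x4& src)
{
    Matrix4x4 inv;

    // Transposed rotation block.
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            inv.m[r][c] = src.m[c][r];

    // New translation row: -t * R^T.
    for (int c = 0; c < 3; ++c)
        inv.m[3][c] = -(src.m[3][0] * inv.m[0][c] +
                        src.m[3][1] * inv.m[1][c] +
                        src.m[3][2] * inv.m[2][c]);

    // Last column stays (0, 0, 0, 1).
    inv.m[0][3] = inv.m[1][3] = inv.m[2][3] = 0.0f;
    inv.m[3][3] = 1.0f;

    return inv;
}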


I chose to make it post-multiplication because, for me, it is more intuitive to have the transformations applied from first to last.

Routines like glTranslate, glRotate, and glMultMatrix (assuming they refer to the obsolete standard immediate mode OpenGL routines) have a fixed definition: they create a new transformation matrix and multiply it onto the right side of whatever is currently on the stack. You cannot alter this behaviour if you stick with those routines! Hence using this order


glTranslate( model_position );
glMultMatrix( view_matrix );

is wrong! Notice that if you want to reverse the order of transformations, you need to consider that

A * B == ( B^T * A^T )^T

so that both involved matrices need to be transposed and the result transposed again. The gl* transform routines do not create those transposed matrices, so you end up with the wrong result.

Hence: get rid of the obsolete transform stuff. Use your own matrix library. Stick with one chosen convention (i.e. use either row vectors or column vectors), and convert only when crossing over to OpenGL (and only if needed). And foremost, as mhagain has written: learn how matrices work.
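To make that concrete, the per-object set-up could then shrink to something like the following sketch (makeTranslation(), multiply() and the camera member name are assumptions about your library, not existing code; rigidInverse() is the sketch from above):

// Row-vector convention: transforms compose left to right, so the
// object's model matrix comes first and the view matrix last.
Matrix4x4 model     = makeTranslation(object->globalOrigin);   // assumed helper
Matrix4x4 view      = rigidInverse(scene->camera.transform);   // member name assumed
Matrix4x4 modelview = multiply(model, view);                   // assumed: model * view

// Hand the finished matrix to GL in one go.  With row-vector logic and
// row-major storage the 16 floats are already in the order glLoadMatrixf
// expects; otherwise transpose before loading.
glLoadMatrixf((GLfloat*)modelview.data);
glutSolidSphere(1, 20, 20);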

Thanks haegarr,

You are right, the view matrix should be the inverse, not only the transpose. Unfortunately I don't have enough knowledge to keep up this conversation; I'm so confused right now that I don't even know what to ask :). I guess I will start again from the beginning with modern OpenGL and improve my own matrix library. Thanks for pointing out the errors, guys! I really appreciate it. Only one last favour: do you know any good resources for learning modern OpenGL?

I suggest you don't switch everything at once. None of these topics is easy, and you may get frustrated if nothing works and you don't know where to look. You could play with the obsolete immediate mode functions first to get a feeling for what the order of transformations means. Equipped with that knowledge it is much easier to develop (and test) your own matrix library. Once you have replaced all of the immediate mode functions with your matrix library and things still work, switching to modern OpenGL would be the last step.

