Decoupling model and view matrices

I realized my lighting solution was inaccurate, but for a long time I couldn't see why. It wasn't until I learned more about the different spaces relevant to OpenGL that I understood I was failing to convert between them. Unfortunately, OpenGL doesn't seem to expose the matrices I need to reach the desired space. I have one set of coordinates in object space and another in world space. Converting the world-space coordinates to object space can't be done without the model matrix (actually, its inverse), and converting the world-space coordinates to eye space can't be done without the view matrix. From what I can see, OpenGL only provides the combined ModelView matrix, not the model or view matrix separately. How can I decouple the two to get just the model or just the view matrix?
You can easily do that using Shaders (GLSL).
Just store the model and view matrices separately in your application.

Then whenever you change either of them, multiply them together and send the result to GL. This way GL still has its model-view as usual, but you've also got access to the current model and view matrices separately.
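For example, a minimal sketch of that approach in C++, assuming column-major float[16] matrices and a hypothetical multiplyMat4() helper from your own math code (GL itself never needs to see the two factors):

// Keep the two matrices yourself; GL only ever sees their product.
float g_viewMatrix[16];   // rebuilt whenever the camera moves
float g_modelMatrix[16];  // rebuilt per object (translation, rotation, etc.)

void uploadModelView()
{
    float modelView[16];
    // multiplyMat4: your own (hypothetical) column-major 4x4 multiply
    multiplyMat4(modelView, g_viewMatrix, g_modelMatrix); // view * model
    glMatrixMode(GL_MODELVIEW);
    glLoadMatrixf(modelView); // fixed-function GL still gets its usual modelview
}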
Quote:Original post by HuntsMan
You can easily do that using Shaders (GLSL).
Not according to the documentation. How would you do this? Merely saying that you can without actually giving any more than that isn't exactly helpful.

@Hodgman I don't have the model and view matrices, so I can't store them separately. That's really the problem. Are you saying that I should recalculate them, despite the fact that OpenGL must have already done this calculation and must, by necessity, have them stored somewhere?

All of my rotations are handled by quaternions. I convert to axis-angle and pass this to OpenGL via gluLookAt and glRotate. Recalculating from that would be pretty slow, considering how much trig would be involved (and already is).
hi,

I have made a program (with source code) which uses world, view and projection matrices like in DirectX (my anim8or file viewer):
http://texel3d.free.fr/projets/liban8/index.html

It depends on GLGX (my math library, very close to D3DX):
http://texel3d.free.fr/projets/glgx/index.html

You need to use GLSL and a math library to manage your own matrices. Do not use the OpenGL functions to create and handle matrices; these functions are deprecated in OpenGL 3.1 and higher (read the OpenGL 3.1 specs). If you are starting a new program, it would be a good idea to avoid all functions deprecated by the OpenGL 3.1 specs (even if you only use OpenGL 2.1, because a lot of the OpenGL 3.1 functionality is already available in OpenGL 2.1).

Note: GLGX is not optimized with SSE instructions. You might find a better library if you need high performance (maybe GLM: http://www.g-truc.net).
I've only just gotten comfortable with 2.1, and don't intend to change to 3.1 for quite some time. I would much rather not upset my learning by dumping what I know for some new system, but thank you for the suggestion anyway. I'll look into the source, but as most people have very different logical approaches to code, it would probably just be more helpful if you were to explain what you did.
_ You can use quaternions, translations, etc. for your computation. At the end, you create or update your world matrix (using a quaternion-to-matrix function, etc.).
_ You store and update your view matrix somewhere (the same as the one from gluLookAt).
_ You store and update your projection matrix.

When you render your scene, you send the 3 matrices to your GLSL shader.
In the GLSL shader, the final position of your vertex is something like:
vertexPos = ProjectionMatrix * ViewMatrix * WorldMatrix * inputVertexPos
(with GLSL's column-vector convention, the matrices apply right to left)

But because you always have access to your world, view, and projection matrices in your C++ application and in your GLSL shader, you can compute whatever you want in any space (in the C++ application and/or in GLSL).
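As a rough sketch of the "quaternion to matrix" step, assuming a unit quaternion stored as (w, x, y, z) and a column-major float[16] world matrix (the view and projection matrices would be kept in similar arrays):

// Build a column-major world matrix from a unit quaternion (w, x, y, z) and a translation.
void worldMatrixFromQuat(float m[16], float w, float x, float y, float z,
                         float tx, float ty, float tz)
{
    m[0] = 1 - 2*(y*y + z*z);  m[4] = 2*(x*y - w*z);      m[8]  = 2*(x*z + w*y);      m[12] = tx;
    m[1] = 2*(x*y + w*z);      m[5] = 1 - 2*(x*x + z*z);  m[9]  = 2*(y*z - w*x);      m[13] = ty;
    m[2] = 2*(x*z - w*y);      m[6] = 2*(y*z + w*x);      m[10] = 1 - 2*(x*x + y*y);  m[14] = tz;
    m[3] = 0;                  m[7] = 0;                  m[11] = 0;                  m[15] = 1;
}

No trig is needed here, and sending the resulting matrices to the shader is shown after the vertex shader example below.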
Nothing is stopping you from doing it in OpenGL 2.1, i.e. GL3 is not required.

As HuntsMan said, use shaders, be they GLSL or Cg.

e.g. something like:

-- vertex shader --
uniform mat4 viewmatrix;
uniform mat4 modelmatrix;
uniform mat4 projectionmatrix;

void main(void)
{
    // gl_Vertex: incoming object-space position (built-in attribute in GLSL 1.20 / OpenGL 2.1)
    gl_Position = projectionmatrix * viewmatrix * modelmatrix * gl_Vertex; // or whatever
}

Note:
Do not use the gl_* built-in matrix uniforms of OpenGL in your GLSL shader. Create your own uniforms to store and send your matrices.
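On the application side, the matching uploads would look roughly like this, assuming prog is your linked GLSL program and modelMatrix, viewMatrix, projectionMatrix are the column-major float[16] arrays you maintain yourself (in practice you would cache the uniform locations rather than query them every frame):

glUseProgram(prog);
glUniformMatrix4fv(glGetUniformLocation(prog, "modelmatrix"),      1, GL_FALSE, modelMatrix);
glUniformMatrix4fv(glGetUniformLocation(prog, "viewmatrix"),       1, GL_FALSE, viewMatrix);
glUniformMatrix4fv(glGetUniformLocation(prog, "projectionmatrix"), 1, GL_FALSE, projectionMatrix);
// GL_FALSE: the arrays are already column-major, so no transpose is needed.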
Quote:Original post by zedz
eg something like...
The problem isn't that I don't know how to pass uniforms; the problem is that I don't know how to acquire the matrices to be passed. Thank you for the explanation, but I can't pass a mat4 to the shader without having the mat4 in the first place, which is what I'm asking about.

@texel3D I understand getting a vertex from object space to eye space and clip space; that much is easy and pretty much handles itself. I'm not worried about clip space, and would only use eye space if necessary. I need two coordinates to share the same space, whatever space that may be. The actual difference between spaces, or the matrices involved in reaching that space, don't matter; what matters is that I have all the variables I need to convert, which I don't. Like I said, I have one coordinate in world space and another in object space. Without the inverse of the model matrix or the view matrix, I can't get them both into the same space.

I can glGet the ModelView matrix, but I don't need the Model*View matrix; I need the model or the view matrix, and glGet has no parameter like "GL_MODEL_MATRIX." Is the only way to get this value to recalculate it, even though it must, by necessity, already be stored somewhere in OpenGL?
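For reference, once the model matrix lives in your own code, the missing piece is only its inverse. A rough sketch, assuming the model matrix is rigid (rotation + translation, no scale), which makes the inverse cheap (transpose the 3x3 rotation block and rotate the negated translation):

// Invert a rigid, column-major model matrix (rotation + translation only).
void invertRigidMat4(float out[16], const float m[16])
{
    // Transpose the 3x3 rotation block.
    out[0] = m[0];  out[4] = m[1];  out[8]  = m[2];
    out[1] = m[4];  out[5] = m[5];  out[9]  = m[6];
    out[2] = m[8];  out[6] = m[9];  out[10] = m[10];
    // Inverse translation: -R^T * t
    out[12] = -(out[0]*m[12] + out[4]*m[13] + out[8]*m[14]);
    out[13] = -(out[1]*m[12] + out[5]*m[13] + out[9]*m[14]);
    out[14] = -(out[2]*m[12] + out[6]*m[13] + out[10]*m[14]);
    out[3] = 0; out[7] = 0; out[11] = 0; out[15] = 1;
}

// Apply it to a world-space point (w = 1) to get the object-space point.
void transformPoint(float out[3], const float m[16], const float p[3])
{
    out[0] = m[0]*p[0] + m[4]*p[1] + m[8]*p[2]  + m[12];
    out[1] = m[1]*p[0] + m[5]*p[1] + m[9]*p[2]  + m[13];
    out[2] = m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14];
}

The same pattern works with the view matrix for world-to-eye conversion; if the model matrix contains scaling, a full 4x4 inverse (e.g. from GLM) is needed instead.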
