scarypajamas

OpenGL Using Custom Matrix Stack with GL 2.x



I'm trying to use a custom matrix stack with my OpenGL 2.x application. I know I can still use the OpenGL matrix stack in this version, but I'd rather not for forward compatibility.

The thing is, I can process all my translations, rotations, scaling, and modelview correctly using my own stack; however, when I multiply by my projection, everything breaks.

The funny thing is that, using the same projection and transformations, I can get it to work if I DON'T multiply my modelview by my projection and instead use glLoadMatrix(myProjection) to set the projection.

Why might this be? I'm not using shaders to transform my vertices; I'm simply transforming them myself on the CPU. I want to be able to do (projection * modelview * vertex), but I can't: I have to first set the projection with glLoadMatrix() and then, on the CPU, do (modelview * vertex).

Unless you're using shaders, are you required to set an initial matrix using GL calls?

Just to be clear, when I attempt to do everything on the CPU, I make zero calls to push/pop/glTranslate/glScale/GL_MODELVIEW/etc.
When I use my workaround of calling glLoadMatrix(myFrustum), that is the only call I make to the GL stack; I never call any other matrix function.
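
To make the two paths concrete, they look roughly like this; mat4, mat4_mul, mat4_mul_vec4, and the buffer names are just stand-ins for my own math code:

/* Workaround that works: set the projection once through GL, apply only the modelview on the CPU */
glLoadMatrixf(myProjection);                                     /* column-major float[16] */
for (int i = 0; i < vertexCount; ++i)
    transformed[i] = mat4_mul_vec4(myModelView, vertices[i]);

/* What I actually want (breaks): leave the GL matrices alone and do everything myself */
mat4 mvp = mat4_mul(myProjection, myModelView);
for (int i = 0; i < vertexCount; ++i)
    transformed[i] = mat4_mul_vec4(mvp, vertices[i]);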

I'm not sure I understand. Are you using shaders or not?
If you are using fixed function, are you setting up the projection matrix, or are you leaving it at the default identity? Lighting won't work right if you leave the projection as identity and fold it into your own transform, because fixed-function lighting is computed in eye space (after the modelview, before the projection). Also, there is the matter of whether you are submitting xyz or xyzw for your vertices.

So upload your projection matrix and upload your modelview matrix.

Or upload your projection matrix, set the modelview to identity, and transform all your vertices by the modelview matrix on the CPU. Why would you want to do that, though?
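
Either way, the upload itself is only a couple of calls; a minimal sketch, assuming your matrices are column-major float[16] arrays named myProjection and myModelView:

glMatrixMode(GL_PROJECTION);
glLoadMatrixf(myProjection);   /* column-major float[16] */
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(myModelView);    /* or glLoadIdentity() if you transform on the CPU */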

Sorry if I wasn't clear. All I'm trying to do is avoid using the OpenGL matrix stack and instead use my own. I'm trying to do this without shaders.

Basically, what I want to do is iterate over all my vertices and, for each vertex, multiply it by my modelview and projection matrices. This isn't working, though. The only way I can get it to work is to set my projection with glLoadMatrix and then iterate and multiply each vertex by my modelview matrix.

I just want to do everything on the CPU: without shaders, without glLoadMatrix, without glFrustum, without gluPerspective, or any other deprecated function.

Also, I'm currently leaving the projection as identity (however, I'm not using lighting, so I don't think it's an issue yet). I'm submitting my vertices as xyz and not xyzw.

Is what I'm trying to do impossible?

No, that is totally possible. There must be a bug somewhere in your code, or you are using the OpenGL API incorrectly. You must submit xyzw, because after multiplying a vertex by the projection matrix, its w coordinate isn't 1 anymore.
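
In other words, hand GL the full 4-component result and let it do the divide by w; a rough sketch, where vec4 and mat4_mul_vec4 stand in for whatever your math library provides:

/* each input vertex is (x, y, z, 1) in object space */
vec4 clip = mat4_mul_vec4(mvp, objectVertex);   /* mvp = projection * modelview; clip.w != 1 for perspective */

/* submit all four components; with the GL matrices left as identity,
   GL still performs clipping, the divide by w, and the viewport transform */
glVertexPointer(4, GL_FLOAT, sizeof(vec4), clipPositions);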


"You must submit xyzw, because after multiplying a vertex by the projection matrix, its w coordinate isn't 1 anymore."

This is true if the projection matrix is not orthographic (an orthographic projection leaves w at 1).
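
Concretely, assuming glFrustum/glOrtho-style matrices, the difference comes from the last row:

const float perspectiveLastRow[4]  = { 0.0f, 0.0f, -1.0f, 0.0f };  /* w_clip = -z_eye, varies per vertex */
const float orthographicLastRow[4] = { 0.0f, 0.0f,  0.0f, 1.0f };  /* w_clip stays 1 */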

Thanks for your swift responses, guys. My projection is not orthographic, and submitting my vertices in xyzw form makes no difference.

The only calls I'm making to OpenGL are:

At startup


glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
glClearDepth(1.0f);
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LEQUAL);
glEnable(GL_TEXTURE_2D);
glEnable(GL_CULL_FACE);


Before rendering


glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);



For rendering, I'm using glDrawElements.

Really, that is it. Except for texture loading and binding, I make no calls to shaders or the GL matrix stack. Again, if I load my projection matrix with glLoadMatrix each time before rendering, it works, but multiplying the projection in manually fails. I think my modelview matrix is correct, though: when I load my projection with glLoadMatrix, I can still multiply each vertex by my modelview manually and it displays correctly.

Just to make extra sure I'm not doing anything really dumb, it should process each vertex like this (in pseudocode):


foreach (Vertex v in myVerticesList) do
    Vertex newVertex = (myProjection * myModelView) * v;
    outputThisVertex(newVertex);
end
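
Or, in actual C, something like the following, where mat4, vec4, mat4_mul, and mat4_mul_vec4 again stand in for my math library:

mat4 mvp = mat4_mul(myProjection, myModelView);        /* P * M */
for (size_t i = 0; i < vertexCount; ++i) {
    vec4 v = { positions[i].x, positions[i].y, positions[i].z, 1.0f };
    transformed[i] = mat4_mul_vec4(mvp, v);            /* keeping all four components */
}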



If you guys think that, in theory, everything should be OK, then you're probably right - I have a bug somewhere else in my code, probably in my math library.

Any particular reason why you must transform on the CPU? You're really only making things more difficult for yourself (not to mention hurting your program's performance).

I'm developing an app for the iPhone and I'm trying to support the OpenGL ES 1.x pipeline. Eventually, I'll add shader support via the 2.x pipeline. Keeping track of my own transformation stack now will make the transition to 2.x much smoother.

When it comes to optimizing for the iPhone, you want to limit your calls to OpenGL, so you do as much processing on the CPU as you can. I'm also using batch rendering, a texture atlas, and interleaved arrays.
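
For reference, the interleaved layout is along these lines; the struct is just an illustration, not my exact vertex format:

typedef struct {
    float pos[3];   /* or float[4] if I end up submitting xyzw */
    float uv[2];
} BatchVertex;

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, sizeof(BatchVertex), &batch[0].pos);
glTexCoordPointer(2, GL_FLOAT, sizeof(BatchVertex), &batch[0].uv);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);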

In addition, the number of vertices I'm processing per frame is very low - fewer than 1000.

Thing is, though, the standard matrix stack functions already run on the CPU anyway, and even if you transform on the CPU your geometry will still be transformed on the GPU too, so you're seriously better off using the API as designed rather than trying to second-guess its behaviour. If nothing else, getting it working with the standard matrix stack will give you a baseline that you know already works to build from when it comes time to implement your own stack.
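
One cheap way to use that baseline, for what it's worth, is to feed the same transforms to both the GL stack and your own stack and compare the results; glGetFloatv can read back GL's current modelview matrix:

float tx = 1.0f, ty = 2.0f, tz = -3.0f, angle = 45.0f;   /* arbitrary test values */
float glResult[16];
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(tx, ty, tz);
glRotatef(angle, 0.0f, 0.0f, 1.0f);
glGetFloatv(GL_MODELVIEW_MATRIX, glResult);   /* column-major float[16] */

/* feed the same translate/rotate to your own stack; the two matrices
   should match element for element, within floating-point tolerance */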

Well, there's no matrix stack in OpenGL ES 2.0, so there's really no point leaning on it except as an intermediate step.
