# OpenGL Matrices and the pipeline

## Recommended Posts

Hi. Well, I've made some decent progress since I first took up learning OpenGL a few months ago. I can now use vertex arrays, vertex buffer objects, indexed geometry, texture mapping, fog, mipmapping, vertex/pixel shaders, etc. However, I feel as if I've skipped some of the basics, which I should really know inside out at this stage...

My first question relates to the two matrices: the 'modelview' matrix and the 'projection' matrix. In my 3D engine I only use the projection matrix to translate and rotate the map and all the game objects. This works fine, but I am unsure as to why there is a second matrix (modelview) and what its purpose is. I have looked for some good explanations of the differences between the two, but couldn't find anything decent. So why do we need two transformation matrices? In what typical scenarios would you use each one? Is the modelview matrix a transformation matrix used for transformations local to each object in a game (e.g. a monster or map item), with the projection matrix used for transforming the world (the entire map geometry)? Or is the modelview matrix simply an additional matrix which can be recalled and used if needed? Why use it?

The second question regards the pipeline and viewing transforms. When I render each frame, I first set up the projection matrix / rotations etc. and then draw the map. I then push this matrix to save its state. The matrix is then transformed again by whatever position/rotation each monster has, and the monster is drawn. After drawing each monster I pop the matrix to restore the previously saved state. Now, if I remove the pop-matrix line, the entire world gets translated by whatever I translate the monster by. This brings me to some alarming conclusions: when I am translating the monster, am I really translating everything else too? I thought that once something was sent down the pipeline, it stayed in whatever position it was in when it was first sent.

Does this mean that every time I move a monster, a whole bunch of unnecessary calculations are being performed on the rest of the map? If so, how can I avoid them? Thanks in advance for your help. I know this post is a bit of a pain [smile], but these are things I really need to get clear in my head before I go any further.

##### Share on other sites
Hi,

Whether you know it or not, you are using the modelview matrix. Search through your code - somewhere I'm sure you'll find the line glMatrixMode(GL_MODELVIEW).

Ok, so on to what the two matrices do. The modelview matrix performs an affine transformation; more practically, it sets the position and orientation of your camera and of objects in the scene in 3d space, and applies other linear transformations such as scale, reflection, and shear.

The projection matrix performs a different function: it converts the 3d world into 2d information that can be displayed on the screen. Variables include field of view, aspect ratio, and near and far clipping planes.

I'm not exactly sure how OpenGL implements this internally, but in any case you don't have to worry about doing 'too much transformation'. That's why matrix concatenation is useful; no matter how many transformations you combine, it still comes down to a single matrix-vector multiplication per vertex.

##### Share on other sites
Projection matrix: for the frustum and ortho setup.
Modelview matrix: for translating/rotating/scaling the objects.
Texture matrix: same idea as the modelview matrix, but applied to texture coordinates (GL_TEXTURE_1D/2D/3D).
For more details, read the Red Book.

##### Share on other sites
Mmm, I'm not sure about your second question, but every push should have a matching pop or you'll run into problems: the matrix stack has a limited depth, and if you push on every iteration of your main loop without popping, it will eventually fill up.
bye

##### Share on other sites
Ah yes, now it makes more sense... I can see the relationship between the two more clearly now. Good reply.

Quote:
 Original post by jyk: Whether you know it or not, you are using the modelview matrix. Search through your code - somewhere I'm sure you'll find the line glMatrixMode(GL_MODELVIEW).

Nope. All throughout my code the matrix used is GL_PROJECTION - even in my vertex shaders. I can see how it can be used to achieve the same effect though, since ultimately it controls how points are projected from world space onto the screen.

Ok, so that's one issue cleared up I think. Any more ideas on the other question?

##### Share on other sites
The answer to the second question is that, for rigid meshes, every vertex is transformed exactly once, with whatever matrix was loaded at the time the draw function is called. Basically, when you call glBegin() the driver makes a copy of the matrix at the top of the stack and sends that along with the vertices you specify, and whatever happens afterwards it preserves the illusion that the vertices are completely transformed & rendered with the current GL state before you go and muck with it.

So, the transformations are just numerical manipulations on the matrix only, and don't actually *move* anything until you draw something. It looks like a little crash-course in linear algebra would help you out a lot in understanding these concepts.

Tom

##### Share on other sites
Also, it is perfectly plausible to use the projection matrix only, as the two matrices are always concatenated prior to transformation. They are only separate because it can be handy to manipulate one and not the other. For instance:

You have a Projection matrix with the following transforms:
1. Frustum transform
2. Viewing (camera) transform

And a Model matrix with the following transforms:
3. World transform
4. Model transform
5. Sub-model transform (objects moving relative to parent objects, etc)

This is more or less the same as one Projection matrix with transforms 1-5, but say you wanted to draw mountains twice, once regularly and once reflected in water. To do the reflection, you want to flip the camera.

If you're using one matrix, you need to pop 5, 4, 3, and 2, specify a new camera transform, and then specify 3-5 again.

If you're using two matrices, you just pop 2 off the Projection and specify your new camera, and the Model matrix stays the same.

Tom

##### Share on other sites
Quote:
 Original post by ParadigmShift: The answer to the second question is that, for rigid meshes, every vertex is transformed exactly once, with whatever matrix was loaded at the time the draw function is called. Basically, when you call glBegin() the driver makes a copy of the matrix at the top of the stack and sends that along with the vertices you specify, and whatever happens afterwards it preserves the illusion that the vertices are completely transformed & rendered with the current GL state before you go and muck with it.

Excellent. That was just what I needed to know.. At least I can be sure now that everything is being transformed once, and once only.

Quote:
 Original post by ParadigmShift: So, the transformations are just numerical manipulations on the matrix only, and don't actually *move* anything until you draw something. It looks like a little crash-course in linear algebra would help you out a lot in understanding these concepts. - Tom

I do a lot of stuff on algebra and matrices in college - just had my final-year maths exam today, in fact! [smile] It's not that I don't understand the maths involved in the rotations; I'm just unsure exactly what the hell OpenGL / the display drivers are doing to my geometry underneath the API. However, your post has helped clear that up, so I am thankful for that.

Well that's great guys, I feel much more enlightened now! [wink] You have been most helpful, so I think extra ratings are in order.

Cheers
-Darragh

##### Share on other sites
Hm, guess I was wrong about having to use the modelview matrix :-| However, see here (section 8.030) and here for discussion of the way OpenGL intends for the matrices to be used and the reasons for following these guidelines.

##### Share on other sites
Thanks for that. Those two articles are very informative - I'll keep them for future reference.

You've also managed to solve another long-outstanding problem I had...

The article about 'GL_PROJECTION' abuse (of which I am guilty [smile]) also mentions the problems of using the projection matrix with fog. Now cast your eye back to the thread I started on that very subject... I was unable to solve the problem (until now) and subsequently had to resort to learning GLSL and using pixel shaders in order to do my fogging. If only I had known about this back then! [smile]
