[GLSL] pushing matrices onto the ModelView Matrix

9 comments, last by TheBuzzSaw 13 years, 11 months ago
In moving to OpenGL 3, all the transformation functions (glRotate, glTranslate, etc.) are deprecated. I am studying GLSL right now, and I understand the principle of how shaders work: vertex shaders are run on every vertex; fragment shaders are run on every pixel, essentially. I see that vertex shaders commonly end with this very logical step:

gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

That is handy and all, but exactly how/where/when do I push matrices onto the ModelView matrix? How do I pop them? I am very comfortable creating my own matrices for all the needed transformations if I have to, but exactly how do I do it?

Let's take a simple example: how would I rotate my scene 15 degrees? In the olden days, I simply called glRotatef(15.0f, 0.0f, 1.0f, 0.0f). Where do I do that now?
You have to calculate the matrices yourself. Either make your own matrix class or find a math library. When you have generated the matrix, send it to the shader as a uniform.
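For concreteness, a minimal sketch of what such a matrix class might start as, assuming the usual OpenGL convention of 16 floats in column-major order (the Mat4 name is just an example, not from any particular library):

struct Mat4
{
    float m[16]; // column-major: element (row, col) lives at m[col * 4 + row]

    static Mat4 identity()
    {
        Mat4 r = {}; // zero-initialize everything
        r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
        return r;
    }
};

Column-major order is what glUniformMatrix4fv expects when you pass GL_FALSE for the transpose argument.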
Yeah, as stalef said, you don't use gl_ModelViewProjectionMatrix anymore; you just build your own.

For your glRotate example, look up "axis-angle to matrix conversion" on Google. That will teach you how to build a rotation matrix from an axis-angle pair, which is similar to what the glRotate command does. From there you can build your own matrix class and give it a function like matrix.Rotate(angle, x, y, z).
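As a rough sketch (a hypothetical helper, not from any particular library), building an axis-angle rotation the way the glRotatef documentation describes might look like this, writing into a column-major float[16]:

#include <cmath>

// Build a rotation of angleDegrees about the axis (x, y, z), matching the
// matrix given in the glRotatef documentation. Assumes a non-zero axis.
void rotationMatrix(float out[16], float angleDegrees, float x, float y, float z)
{
    const float len = std::sqrt(x * x + y * y + z * z);
    x /= len; y /= len; z /= len; // normalize the axis

    const float radians = angleDegrees * 3.14159265358979f / 180.0f;
    const float c = std::cos(radians);
    const float s = std::sin(radians);
    const float t = 1.0f - c;

    // Column 0
    out[0]  = x * x * t + c;      out[1]  = y * x * t + z * s;  out[2]  = z * x * t - y * s;  out[3]  = 0.0f;
    // Column 1
    out[4]  = x * y * t - z * s;  out[5]  = y * y * t + c;      out[6]  = z * y * t + x * s;  out[7]  = 0.0f;
    // Column 2
    out[8]  = x * z * t + y * s;  out[9]  = y * z * t - x * s;  out[10] = z * z * t + c;      out[11] = 0.0f;
    // Column 3
    out[12] = 0.0f;               out[13] = 0.0f;               out[14] = 0.0f;               out[15] = 1.0f;
}

Something like rotationMatrix(m, 15.0f, 0.0f, 1.0f, 0.0f) then gives you the same matrix that glRotatef(15.0f, 0.0f, 1.0f, 0.0f) used to multiply onto the stack.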

Or you can just find and use a matrix library; I'm sure you can get some good suggestions here, though I don't have one to recommend.
Yeah, I am familiar with all the specific matrices. I know how to build them. I'm just trying to understand exactly where to apply them.

Really? Matrices are no longer put into gl_ModelViewProjection? That clears up a great deal for me. I've been hunting high and low for how to push/pop matrices. So, for a given shader, it just needs to perform the calculations, stick the results into a matrix, and multiply it on the vertex directly?

My fear was that this would be inefficient. I figured it would make more sense to have a single point where matrices are multiplied into the MVP since all the vertices use it anyway.
Quote:
it just needs to perform the calculations, stick the results into a matrix, and multiply it on the vertex directly

You'll build the matrix (or matrices) on the CPU and then send them to the GPU via glUniform. You don't want to be building any matrices in your shader.

Quote:
My fear was that this would be inefficient. I figured it would make more sense to have a single point where matrices are multiplied into the MVP since all the vertices use it anyway.


It's pretty much all up to you. If all you need is an MVP matrix, then you compute that on the CPU and send your shader just the MVP matrix; then each vertex_out = MVPMatrix * vertex_in.

You can send any combination of matrices you want to the shader to make it as efficient as you want. In some special-effect cases I've sent a modelview matrix, a view matrix, and an MVP matrix all to the same shader, and just used each where needed.
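As a sketch of that CPU side, assuming column-major float[16] arrays, a hand-rolled multiply helper, and a placeholder uniform name "myMVPMatrix" (it has to match whatever the shader actually declares):

// out = a * b, all three matrices column-major float[16]
void multiply(float out[16], const float a[16], const float b[16])
{
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
        {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = sum;
        }
}

// Compose projection * view * model once per object and upload the result.
// Assumes an OpenGL 3+ function loader is already set up and the program is in use.
void uploadMVP(GLuint program, const float model[16], const float view[16], const float projection[16])
{
    float viewModel[16], mvp[16];
    multiply(viewModel, view, model);
    multiply(mvp, projection, viewModel);

    GLint location = glGetUniformLocation(program, "myMVPMatrix");
    glUniformMatrix4fv(location, 1, GL_FALSE, mvp); // GL_FALSE: data is already column-major
}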
You can use my lib if you want.
http://glhlib.sourceforge.net

Among other things, it does matrix calculations on the CPU. Instead of having glLoadIdentity, there is glhLoadIdentityf2
glhRotatef2
glhRotateAboutXf2
and many others.
Some of the function names end in SSE; those versions use SSE instructions.
Of course, it is still YOUR job to send the matrix to GL with a glUniform.
// build the matrix (and its inverse) on the CPU...
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
// ...then upload both to the shader
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
So, the end of my shader should factor in my own composite matrix that I pushed in?

gl_Position = gl_ModelViewProjectionMatrix * mySpecialMatrix * gl_Vertex

???
gl_Position, gl_ModelViewProjectionMatrix, and gl_Vertex are all deprecated.

A core compatible shader would look something like this:
in vec4 in_vertex;
out vec4 out_vertex;
uniform mat4 myModelViewProjectionMatrix;

void main()
{
    out_vertex = myModelViewProjectionMatrix * in_vertex;
}


Or:

in vec4 in_vertex;
out vec4 out_vertex;
uniform mat4 myModelMatrix;
uniform mat4 myViewMatrix;
uniform mat4 myProjectionMatrix;

void main()
{
    out_vertex = myProjectionMatrix * myViewMatrix * myModelMatrix * in_vertex;
    // other_stuff = myViewMatrix * other_stuff;  etc.
}
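On the CPU side, the matching uploads for that second shader might look roughly like this (a sketch, assuming the program is already linked and that model, view, and projection are column-major float[16] arrays built by your own matrix code):

void uploadMatrices(GLuint program, const float model[16], const float view[16], const float projection[16])
{
    glUseProgram(program);
    // The uniform names must match the declarations in the vertex shader above.
    glUniformMatrix4fv(glGetUniformLocation(program, "myModelMatrix"),      1, GL_FALSE, model);
    glUniformMatrix4fv(glGetUniformLocation(program, "myViewMatrix"),       1, GL_FALSE, view);
    glUniformMatrix4fv(glGetUniformLocation(program, "myProjectionMatrix"), 1, GL_FALSE, projection);
}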
Ahhh... That answers my question. Thank you!
Actually, my mistake: I think gl_Position is still valid, but I'm not 100% sure.

Use that instead of out_vertex (I was confusing it with a more generic attribute).

This topic is closed to new replies.
