[GLSL] pushing matrices onto the ModelView Matrix
In moving to OpenGL 3, all the transformation functions (glRotate, glTranslate, etc.) are deprecated. I am studying GLSL right now. I understand the principle of how shaders work. Vertex shaders are run on every vertex; fragment shaders are run on every pixel, essentially. I see that vertex shaders commonly end with this very logical step:
gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex
That is handy and all, but exactly how/where/when do I push matrices onto the ModelViewMatrix? How do I pop them? I am very comfortable creating my own matrices if I have to for all the needed transformations, but exactly how do I do it?
Let's take a simple example. How would I rotate my scene 15 degrees? In the olden days, I simply called glRotatef(15.0f, 0.0f, 1.0f, 0.0f); where do I do that now?
You have to calculate the matrices yourself. Either write your own matrix class or find a math library. Once you have built the matrix, send it to the shader as a uniform...
Yeah, as stalef said, you don't use the gl_ModelViewProjection matrix anymore; you just build your own.
For your glRotate example, look up "axis-angle to matrix conversion" on Google. That will teach you how to build a rotation matrix from an axis-angle pair, which is what the glRotate command does. From there you can build your own matrix class and give it a function like matrix.Rotate(angle, x, y, z).
Or you can just find and use a matrix library; I'm sure you can get some good suggestions here, though I don't have one to recommend myself.
Yeah, I am familiar with all the specific matrices. I know how to build them. I'm just trying to understand exactly where to apply them.
Really? Matrices are no longer put into gl_ModelViewProjection? That clears up a great deal for me. I've been hunting high and low for how to push/pop matrices. So, for a given shader, it just needs to perform the calculations, stick the results into a matrix, and multiply it on the vertex directly?
My fear was that this would be inefficient. I figured it would make more sense to have a single point where matrices are multiplied into the MVP since all the vertices use it anyway.
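Since the old matrix stack is gone along with the transformation functions, push/pop is also something you replicate on the CPU with a few lines of code. A minimal sketch in C; the type name, depth limit, and function names here are my own illustration, not a standard API:

```c
#include <string.h>

/* A tiny CPU-side replacement for the glPushMatrix/glPopMatrix stack.
 * STACK_DEPTH and all names are illustrative choices. */
#define STACK_DEPTH 32

typedef struct {
    float m[STACK_DEPTH][16];  /* each entry is a 4x4 column-major matrix */
    int top;                   /* index of the current matrix */
} MatrixStack;

void stack_init(MatrixStack *s)
{
    static const float identity[16] = {
        1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1
    };
    s->top = 0;
    memcpy(s->m[0], identity, sizeof identity);
}

/* duplicate the current matrix, like glPushMatrix; returns -1 on overflow */
int stack_push(MatrixStack *s)
{
    if (s->top + 1 >= STACK_DEPTH) return -1;
    memcpy(s->m[s->top + 1], s->m[s->top], sizeof s->m[0]);
    s->top++;
    return 0;
}

/* discard the current matrix, like glPopMatrix; returns -1 on underflow */
int stack_pop(MatrixStack *s)
{
    if (s->top == 0) return -1;
    s->top--;
    return 0;
}

/* the matrix you would hand to glUniformMatrix4fv before each draw call */
float *stack_current(MatrixStack *s) { return s->m[s->top]; }
```

Each draw call then uploads stack_current(&s) as a uniform instead of relying on GL's internal stack.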
Quote:
it just needs to perform the calculations, stick the results into a matrix, and multiply it on the vertex directly
You'll build the matrix(ces) on CPU and then send it to the GPU via glUniform. You don't want to be building any matrices in your shader.
Quote:
My fear was that this would be inefficient. I figured it would make more sense to have a single point where matrices are multiplied into the MVP since all the vertices use it anyway.
It's pretty much all up to you. If all you need is an MVP matrix, then you compute that on the CPU and send your shader just the MVP matrix; then each vertex_out = MVPMatrix * vertex_in.
You can send any combination of matrices you want to the shader to make it as efficient as you want. In some special-effect cases I've sent a modelview matrix, a view matrix, and an MVP matrix all to the same shader, and just used each where needed.
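The composition itself is just a 4x4 matrix multiply done on the CPU. A minimal C sketch, column-major as OpenGL expects; the name mat4_mul is illustrative:

```c
/* out = a * b for 4x4 column-major matrices (OpenGL layout).
 * This is the CPU-side step that replaces letting GL accumulate
 * transforms into its modelview stack. */
void mat4_mul(float out[16], const float a[16], const float b[16])
{
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float sum = 0.0f;
            for (int k = 0; k < 4; k++)
                sum += a[k * 4 + row] * b[col * 4 + k];
            out[col * 4 + row] = sum;
        }
}
```

With this you compute mvp = projection * view * model once per object and upload the single result, rather than multiplying three matrices per vertex in the shader.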
You can use my lib if you want.
http://glhlib.sourceforge.net
Among other things, it does matrix calculations on the CPU. Instead of having glLoadIdentity, there is glhLoadIdentityf2,
glhRotatef2
glhRotateAboutXf2
and many others.
Some of the function names end in SSE; those versions use SSE instructions.
Of course, it is still YOUR job to send the matrix to GL with a glUniform.
So, the end of my shader should factor in my own composite matrix that I pushed in?
gl_Position = gl_ModelViewProjectionMatrix * mySpecialMatrix * gl_Vertex
???
gl_Position, gl_ModelViewProjectionMatrix, and gl_Vertex are all deprecated.
A core compatible shader would look something like this:
in vec4 in_vertex;
out vec4 out_vertex;
uniform mat4 myModelViewProjectionMatrix;

void main() {
    out_vertex = myModelViewProjectionMatrix * in_vertex;
}
Or:
in vec4 in_vertex;
out vec4 out_vertex;
uniform mat4 myModelMatrix;
uniform mat4 myViewMatrix;
uniform mat4 myProjectionMatrix;

void main() {
    out_vertex = myProjectionMatrix * myViewMatrix * myModelMatrix * in_vertex;
    // other_stuff = myViewMatrix * other_stuff, etc.
}
Actually, my mistake: I think gl_Position is still valid, but I'm not 100% sure.
Use that instead of out_vertex (I was confusing it with a more generic attribute).