
# GLSL and using GLM


3 replies to this topic

### #1 JeKbe  Members - Reputation: 100


Posted 09 January 2012 - 07:43 AM

I have started using the GLM math library, but I have the following problem: why can't I do the matrix multiplication in the C++ code? There seems to be a difference between matrix multiplication in the C++ code and how GLSL does it. I do not transpose my matrices when I load them into GLSL.

The projection and view matrices are taken directly from glm::perspective and glm::lookAt; the model is just an identity matrix. So I can't see how it can go wrong.

I stumbled upon this as I was trying to do some shadow projection onto a surface. It worked fine when I passed the shadow matrix in as a uniform together with the other 3 matrices, but as soon as I multiplied the shadow matrix onto the model it stopped working. My shadow matrix is correct as far as I can see from testing in MATLAB; it does rely on perspective division, though, which is done in the shader. But even the following simple matrix operations seem to fail for some reason.

Works. Pass the matrices in as separate uniforms together with the vertex as a vec3.

```glsl
// GLSL
gl_Position = projection * view * model * vec4(vertex, 1.0);
```


Doesn't work. Pass in a combined matrix as a single uniform together with a vec3.

```cpp
// C++
mat4 combined = projection * view * model;
```

```glsl
// GLSL
gl_Position = combined * vec4(vertex, 1.0);
```


Works. Do all the work in C++ and pass the vertex in as a vec4, setting it directly in the shader.

```cpp
// C++
mat4 combined = projection * view * model;
vec4 vertex = combined * v;
```

```glsl
// GLSL
gl_Position = vertex;
```


Thanks for any help.

I'm aware that OpenGL is column-major while C arrays are conventionally treated as row-major. But shouldn't GLM take care of this?

Edited by JeKbe, 09 January 2012 - 07:46 AM.

### #2 mrjones  Members - Reputation: 612


Posted 09 January 2012 - 10:47 AM

I stumbled upon something similar. I'm still not sure why, and it is probably an error somewhere else in my code, but I only got my code to work by transposing my modelview matrix before multiplying it onto the current OpenGL matrix:

```cpp
glm::mat4 mv;
// Build mv with any matrix operations, from translate and rotate to scale ...
mv = glm::transpose(mv);
glMultMatrixf(glm::value_ptr(mv));
```

This seemed strange to me as well, especially since the documentation says GLM uses the same conventions as OpenGL, but nothing else seemed to work.

### #3 JeKbe  Members - Reputation: 100


Posted 09 January 2012 - 02:16 PM

mrjones, I'll try that. I went away from immediate mode and implemented my own shader, which worked. Attempting to use glMultMatrix with my shadow matrix didn't work either, but it did work when I passed it in as a uniform to the shader.

But I still don't understand what is wrong.

Silly mistake; I figured it out. It appears I completely misunderstood how glUniform works. I thought it was like the vertex arrays/buffers you use for colors, vertices, normals, etc., where you only specify them once in some init code. But of course your matrix uniforms need updating every time you change them. So inserting a few glUniform calls into my render loop sorted it out. Everything works as I would expect now.

### #4 mrjones  Members - Reputation: 612


Posted 10 January 2012 - 03:15 AM

Glad you got it to work! It's something I've shot myself in the foot with as well, but it didn't occur to me this time.

