Storing Matrices and Multiplication

I'm working with OpenGL ES 2.0, so I have to pass my matrices into shaders as regular uniforms rather than through the fixed-function matrix calls of desktop OpenGL. I'm storing my matrices in a column-major layout like so:


m[ 0] m[ 4] m[ 8] m[12]
m[ 1] m[ 5] m[ 9] m[13]
m[ 2] m[ 6] m[10] m[14]
m[ 3] m[ 7] m[11] m[15]
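Concretely, with this layout the element at row r, column c lives at m[c*4 + r]; a translation by (2, 3, 4), for example, would be laid out like this (illustrative values):

/* Column-major storage: element (row r, column c) is m[c*4 + r].
   A translation by (2, 3, 4) puts its components in m[12], m[13],
   m[14], i.e. the fourth column of the layout above. */
float m[16] = {
    1.0f, 0.0f, 0.0f, 0.0f,   /* column 0 */
    0.0f, 1.0f, 0.0f, 0.0f,   /* column 1 */
    0.0f, 0.0f, 1.0f, 0.0f,   /* column 2 */
    2.0f, 3.0f, 4.0f, 1.0f    /* column 3: translation (2, 3, 4) */
};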


When I multiply two matrices together, I take the dot product of each row of the left matrix with each column of the right matrix. Therefore, I transform my vertices like so:
out_vertex = projectionMatrix * viewMatrix * modelMatrix * in_vertex
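For concreteness, a 4x4 multiply under this storage layout might look like the following C sketch (mat4_mul is a made-up name; out must not alias the inputs):

/* Sketch: out = a * b for 4x4 matrices stored column-major.
   Row r of a times column c of b gives element (r, c) of out.
   out must not alias a or b. */
void mat4_mul(float out[16], const float a[16], const float b[16])
{
    for (int c = 0; c < 4; ++c) {
        for (int r = 0; r < 4; ++r) {
            float sum = 0.0f;
            for (int k = 0; k < 4; ++k)
                sum += a[k*4 + r] * b[c*4 + k];
            out[c*4 + r] = sum;
        }
    }
}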


The OpenGL tutorials I've seen online seem to multiply all the matrices in the reverse order, like so:
out_vertex = modelMatrix * viewMatrix * projectionMatrix * in_vertex


When I multiply my matrices together with the model matrix first (as in the second example), my vertices don't appear onscreen. When I multiply them my way (the first example), the vertices are transformed correctly.


What seems to be the issue here?
You are using column-major, so your formula is
out_vertex = projectionMatrix * viewMatrix * modelMatrix * in_vertex

In the tutorial they are using row-major matrices, but the formula is probably wrong:
out_vertex = modelMatrix * viewMatrix * projectionMatrix * in_vertex
In row-major calculations it should be:
out_vertex = in_vertex * modelMatrix * viewMatrix * projectionMatrix

When you look at an OpenGL 1 example, it uses the matrix stack instead of direct multiplication. The stack is FILO (first in, last out), so the matrices are multiplied in reverse order, as illustrated below.
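For illustration, a classic fixed-function sequence (a sketch with made-up transforms):

#include <GL/gl.h>

/* Sketch: each fixed-function call right-multiplies the current top
   of the matrix stack, so although the calls read top-down, the
   transform issued last is applied to the vertex first. */
void setup_modelview(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();                   /* M = I                 */
    glTranslatef(0.0f, 0.0f, -5.0f);    /* M = T      ("view")   */
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f); /* M = T * R  ("model")  */
    /* vertices are then transformed as M * v = T * R * v */
}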

Stay with your calculations; they are correct.

[quote name='SaTANO']
You are using column-major, so your formula is
out_vertex = modelMatrix * viewMatrix * projectionMatrix * in_vertex

In the tutorial they are using row-major matrices, but the formula is probably wrong:
out_vertex = modelMatrix * viewMatrix * projectionMatrix * in_vertex
In row-major calculations it should be:
out_vertex = in_vertex * modelMatrix * viewMatrix * projectionMatrix

When you look at an OpenGL 1 example, it uses the matrix stack instead of direct multiplication. The stack is FILO, so the matrices are multiplied in reverse order.

Stay with your calculations; they are correct.
[/quote]


I think your first one should be
out_vertex = projectionMatrix * viewMatrix * modelMatrix * in_vertex;

Furthermore, you can simplify that into a single matrix (it results in fewer operations on the GPU), as sketched below:
out_vertex = MyBigMatrix * in_vertex;

But it is up to you.
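For example, a sketch of that CPU-side composition (reusing the mat4_mul sketch from earlier in the thread; "u_pvmMat" is a hypothetical uniform name):

#include <GLES2/gl2.h>

/* Sketch: compose projection * view * model once on the CPU and
   upload it as a single uniform. */
void upload_pvm(GLuint program, const float proj[16],
                const float view[16], const float model[16])
{
    float pv[16], pvm[16];
    mat4_mul(pv, proj, view);   /* pv  = projection * view         */
    mat4_mul(pvm, pv, model);   /* pvm = projection * view * model */
    /* the transpose argument must be GL_FALSE in OpenGL ES 2.0 */
    glUniformMatrix4fv(glGetUniformLocation(program, "u_pvmMat"),
                       1, GL_FALSE, pvm);
}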

[quote name='SaTANO' timestamp='1326013625' post='4900584']
You are using column-major, so your formula is
out_vertex = modelMatrix * viewMatrix * projectionMatrix * in_vertex

In the tutorial they are using row-major matrices, but the formula is probably wrong:
out_vertex = modelMatrix * viewMatrix * projectionMatrix * in_vertex
In row-major calculations it should be:
out_vertex = in_vertex * modelMatrix * viewMatrix * projectionMatrix

When you look at an OpenGL 1 example, it uses the matrix stack instead of direct multiplication. The stack is FILO, so the matrices are multiplied in reverse order.

Stay with your calculations; they are correct.


I think your first one should be
out_vertex = projectionMatrix * viewMatrix * modelMatrix * in_vertex;

Furthermore, you can simplify that into a single matrix (it results in fewer operations on the GPU):
out_vertex = MyBigMatrix * in_vertex;

But it is up to you.
[/quote]

YES, you are right, I just pasted the same thing twice :D
out_vertex = projectionMatrix * viewMatrix * modelMatrix * in_vertex;

But as I wrote:
Stay with your calculations; they are correct.
I see, so it's been doing it as LIFO. I wish I had checked that. By the way, I do combine all the matrices into one transform matrix on the CPU, then upload that single matrix for all operations. However, I'm finding that my reflection shader isn't working correctly.

Here's what I've come up with from looking at various shaders online:

// vertex shader

uniform mat3 u_normalMat;   // inverse-transpose of the MODEL matrix
uniform mat4 u_pvmMat;      // projection * view * model, combined on the CPU

attribute vec4 a_position;
attribute vec3 a_normal;

varying vec3 v_position;
varying vec3 v_worldNormal;

void main()
{
    vec4 pos = a_position;
    // note: pos is still in object space here; it has not been
    // transformed by the model matrix
    v_position = pos.xyz / pos.w;
    v_worldNormal = u_normalMat * a_normal;
    gl_Position = u_pvmMat * pos;
}


// fragment shader

precision lowp float;

uniform vec3 u_eyePos;        // camera position in world space
uniform samplerCube u_envMap;

varying vec3 v_position;
varying vec3 v_worldNormal;

void main()
{
    // direction from the surface point toward the eye
    vec3 eye = normalize(u_eyePos - v_position);
    // reflect() expects its first argument to be the incident vector,
    // i.e. pointing from the eye toward the surface
    vec3 ray = reflect(eye, v_worldNormal);
    gl_FragColor = textureCube(u_envMap, ray);
}


When I rotate my model, I get incorrect-looking results, so I don't think my normals or the camera position are being transformed correctly. Right now, my camera position is in world space, and the normals are transformed into world space in the shader using u_normalMat, which is the inverse-transposed version of my object's MODEL transform matrix.

I also get similar results when I don't transform my normals at all and instead transform my camera's position by the inverse of the object's MODEL matrix.

Also, in case there's any confusion: in my implementation I keep the MODEL and VIEW matrices separate, since I'm targeting OpenGL ES 2.0.
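For reference, a common shortcut for the normal matrix, as a C sketch (the assumption that the MODEL matrix contains only rotation and translation is mine):

/* Sketch: extract a 3x3 normal matrix from a column-major 4x4 model
   matrix. Assumption: the model matrix is rotation + translation
   only; then the inverse-transpose of its upper-left 3x3 equals that
   3x3 itself, so it can simply be copied out. */
void normal_matrix_from_model(float n[9], const float m[16])
{
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            n[c*3 + r] = m[c*4 + r];
}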
Just for clarification:

Matrix multiplication is always done as a product of rows from the left matrix and columns from the right matrix. This implies that if you want to write a vector (i.e. a matrix with extent 1 in one of its two dimensions) on the right of a matrix, it must be a column vector (1 column, with the number of rows matching the number of columns of the matrix); similarly, on the left of a matrix it must be a row vector (1 row, with the number of columns matching the number of rows of the matrix).

Whether one linearizes the 2D arrangement of matrix coefficients in row-major or in column-major order when storing it in computer memory is an entirely separate question. E.g. you can store a matrix in row-major order although using column vectors. Hence the term "column-major" by itself doesn't justify whether a vector has to appear on the left or on the right. Moreover, as soon as the transpose operator is used, vectors may in principle appear on either side.
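To make that independence concrete, here is a small C sketch (illustrative, with made-up values): the same 16 floats can be read column-major and applied to a column vector, or read row-major and applied to a row vector, and both produce the same numbers:

#include <stdio.h>

/* y = M * x, with M stored column-major and x a column vector */
static void mul_col(const float m[16], const float x[4], float y[4])
{
    for (int r = 0; r < 4; ++r) {
        y[r] = 0.0f;
        for (int c = 0; c < 4; ++c)
            y[r] += m[c*4 + r] * x[c];
    }
}

/* y = x * N, with N the row-major reading of the same memory
   (i.e. N is the transpose of M) and x a row vector */
static void mul_row(const float m[16], const float x[4], float y[4])
{
    for (int c = 0; c < 4; ++c) {
        y[c] = 0.0f;
        for (int r = 0; r < 4; ++r)
            y[c] += x[r] * m[r*4 + c];
    }
}

int main(void)
{
    /* Column-major: a translation by (2, 3, 4).
       Row-major: the transpose of that translation. */
    const float m[16] = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 2,3,4,1 };
    const float v[4]  = { 1, 1, 1, 1 };
    float a[4], b[4];
    mul_col(m, v, a);
    mul_row(m, v, b);
    printf("%g %g %g %g\n", a[0], a[1], a[2], a[3]);  /* 3 4 5 1 */
    printf("%g %g %g %g\n", b[0], b[1], b[2], b[3]);  /* 3 4 5 1 */
    return 0;
}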

The matrices are ordered according to the spaces in which the transformations are given. Let us assume we're using a column vector ${}_l\mathbf{v}$ for a vertex position, and that the vertex position is given w.r.t. the model local space. We have a matrix ${}_wM_l$ that transforms from model local space into world space, and a matrix ${}_vM_w$ that transforms from world space into view space (a.k.a. camera local space). What we obviously can do is transform the vertex position from model local to world to view space, i.e. we apply ${}_wM_l$ first and ${}_vM_w$ on the result. Application means multiplication on the left, because we've chosen to use column vectors here. Hence:

$$ {}_v\mathbf{v} := {}_vM_w \, ( {}_wM_l \, {}_l\mathbf{v} ) = {}_vM_w \, {}_wM_l \, {}_l\mathbf{v} = ( {}_vM_w \, {}_wM_l ) \, {}_l\mathbf{v} = {}_vM_l \, {}_l\mathbf{v} \quad \text{where} \quad {}_vM_l := {}_vM_w \, {}_wM_l $$

BTW: I've chosen that somewhat uncommon indexing to show the transition from one space into the other.
