# OpenGL Matrix Problem


## Recommended Posts

Hi all. I'm fairly new to game programming and recently decided to try to include 3D models in a game engine I'm working on. I bought a good book on the subject by Evan Pipho. I had good success with the MD2 model format, and now I'm trying to work with the MS3D format so I can use skeletal animation. I can load the model and draw it in its original pose, but I've had very little success animating it (it turns into mush).

Evan's demo works perfectly, so I've been using it as a basis for comparison, and I've tracked down the difference between his code and mine... but that has only deepened my confusion. I'm programming the engine in OpenGL and, like him, designed a Vector class and a Matrix class for manipulating verts, and that's where the differences show up. He seems to store the translational data for the matrix in the bottom row, but the translation matrix in the back of the Red Book stores the translational data in the last column instead. Using the Red Book's layout I can multiply a rotation matrix and a translation matrix together (much the way DirectX seems to) and then transform a vector exactly as I'd expect. However, the resulting matrices I'm getting do not match his, which is obviously due to the location of the translational data.

I'm really stumped. Although I've narrowed the problem down, I can't understand why he placed his translational data there, and I believe understanding that is the key to fixing the problem. I'd be happy to post source code, but there's so much of it that posting it all is impractical. This is my first post here on GameDev, so if anyone has suggestions I'd really appreciate it!

##### Share on other sites
Sounds like your problem is about how matrices and vectors work. OpenGL uses a notation with column vectors and column-major matrices. This means vectors are represented as 4x1 matrices (4 tall, 1 wide). Matrices being column-major means that, when stored in linear memory, the 2D array is flattened column by column.

Now, if the book instead uses a notation of row vectors and row-major matrices, everything will look different on paper: vectors are now 1x4 (1 tall, 4 wide), and the matrices will look transposed compared to what OpenGL uses. The transpose effectively puts the translation in the bottom row instead of the right column (as in OpenGL).

However, once you work out how matrix multiplication works, and how the matrices are flattened to linear memory, you will see that everything actually ends up exactly the same. If you get different results, you are most likely reading something wrong, or the book is explaining something wrong.

##### Share on other sites
Well, his code doesn't seem to quite line up with the book's ideas. The custom matrix class he creates seems to use row-major storage for the rotational portion but column-major for the translational data. More specifically, here is my code for setting the translational and rotational information:

```cpp
void Matrix4X4::SetTranslate(float x, float y, float z)
{
	m[3] = x;
	m[7] = y;
	m[11] = z;
}

void Matrix4X4::SetRotation(float x, float y, float z)
{
	float cosX = cosf(x);
	float cosY = cosf(y);
	float sinX = sinf(x);
	float sinY = sinf(y);
	float cosZ = cosf(z);
	float sinZ = sinf(z);
	float cosXsinY = cosX * sinY;
	float sinXsinY = sinX * sinY;

	m[0]  = cosY * cosZ;
	m[1]  = -cosZ * sinZ;
	m[2]  = -sinY;
	m[4]  = -sinXsinY * cosZ + cosX * sinZ;
	m[5]  = sinXsinY * sinZ + cosX * cosZ;
	m[6]  = -sinX * cosY;
	m[8]  = cosXsinY * cosZ + sinX * sinZ;
	m[9]  = -cosXsinY * sinZ + sinX * cosZ;
	m[10] = cosX * cosY;
	m[15] = 1.0f;
}
```

and his code:

```cpp
inline void CMatrix4X4::SetRotation(float fX, float fY, float fZ)
{
	double cx = cos(fX);
	double sx = sin(fX);
	double cy = cos(fY);
	double sy = sin(fY);
	double cz = cos(fZ);
	double sz = sin(fZ);

	m_fMat[0]  = (float)(cy * cz);
	m_fMat[1]  = (float)(cy * sz);
	m_fMat[2]  = (float)(-sy);
	m_fMat[4]  = (float)(sx * sy * cz - cx * sz);
	m_fMat[5]  = (float)(sx * sy * sz + cx * cz);
	m_fMat[6]  = (float)(sx * cy);
	m_fMat[8]  = (float)(cx * sy * cz + sx * sz);
	m_fMat[9]  = (float)(cx * sy * sz - sx * cz);
	m_fMat[10] = (float)(cx * cy);
	m_fMat[15] = 1.0f;
}

inline void CMatrix4X4::SetTranslation(float fX, float fY, float fZ)
{
	m_fMat[12] = fX;
	m_fMat[13] = fY;
	m_fMat[14] = fZ;
}
```

Also, for his matrix multiplication he multiplies them backwards. I thought for a while that these matrices were simply transposed from each other, but that isn't quite true from what I can tell. It's important to note that I'm not using these matrices directly in OpenGL; I'm not feeding them to the matrix stack at all, just using them to edit vertex data directly.

[Edited by - enotirab on January 20, 2009 7:41:59 PM]

##### Share on other sites
Having the translation in elements 3, 7 and 11 is unusual. Not that it's wrong, but it implies an uncommon pairing of vector ordering (row or column vectors) and matrix majorness (row- or column-major storage).

As long as you are consistent, everything will be correct. You just have to be very careful to define all operations correctly, and since you're getting wrong results, I believe you're not defining your operations (multiplication specifically) correctly with respect to vector ordering and which side you multiply from.

##### Share on other sites
That must be the case. The reason I placed them there is that much of the literature I've read seems to place them there: the back of the Red Book, and also this site.

When I transform hard-coded verts by multiplying them I *seem* to get the correct result, but my matrices do not match his, and the model verts are definitely not coming out correctly. I'm unsure why this is inconsistent. It's likely that you're correct and I'm misunderstanding how the vector transformations should occur.

Here is my code for it:
```cpp
Matrix4X4 Matrix4X4::operator*(const Matrix4X4 rhs)
{
	Matrix4X4 temp(
		// first row
		m[0]*rhs.m[0] + m[1]*rhs.m[4] + m[2]*rhs.m[8]  + m[3]*rhs.m[12],
		m[0]*rhs.m[1] + m[1]*rhs.m[5] + m[2]*rhs.m[9]  + m[3]*rhs.m[13],
		m[0]*rhs.m[2] + m[1]*rhs.m[6] + m[2]*rhs.m[10] + m[3]*rhs.m[14],
		m[0]*rhs.m[3] + m[1]*rhs.m[7] + m[2]*rhs.m[11] + m[3]*rhs.m[15],
		// second row
		m[4]*rhs.m[0] + m[5]*rhs.m[4] + m[6]*rhs.m[8]  + m[7]*rhs.m[12],
		m[4]*rhs.m[1] + m[5]*rhs.m[5] + m[6]*rhs.m[9]  + m[7]*rhs.m[13],
		m[4]*rhs.m[2] + m[5]*rhs.m[6] + m[6]*rhs.m[10] + m[7]*rhs.m[14],
		m[4]*rhs.m[3] + m[5]*rhs.m[7] + m[6]*rhs.m[11] + m[7]*rhs.m[15],
		// third row
		m[8]*rhs.m[0] + m[9]*rhs.m[4] + m[10]*rhs.m[8]  + m[11]*rhs.m[12],
		m[8]*rhs.m[1] + m[9]*rhs.m[5] + m[10]*rhs.m[9]  + m[11]*rhs.m[13],
		m[8]*rhs.m[2] + m[9]*rhs.m[6] + m[10]*rhs.m[10] + m[11]*rhs.m[14],
		m[8]*rhs.m[3] + m[9]*rhs.m[7] + m[10]*rhs.m[11] + m[11]*rhs.m[15],
		// fourth row
		m[12]*rhs.m[0] + m[13]*rhs.m[4] + m[14]*rhs.m[8]  + m[15]*rhs.m[12],
		m[12]*rhs.m[1] + m[13]*rhs.m[5] + m[14]*rhs.m[9]  + m[15]*rhs.m[13],
		m[12]*rhs.m[2] + m[13]*rhs.m[6] + m[14]*rhs.m[10] + m[15]*rhs.m[14],
		m[12]*rhs.m[3] + m[13]*rhs.m[7] + m[14]*rhs.m[11] + m[15]*rhs.m[15]);
	return temp;
}

void Vector::Transform(const Matrix4X4& mat)
{
	float nx, ny, nz;
	nx = mat.Get(0,0)*v[vX] + mat.Get(1,0)*v[vY] + mat.Get(2,0)*v[vZ] + mat.Get(3,0);
	ny = mat.Get(0,1)*v[vX] + mat.Get(1,1)*v[vY] + mat.Get(2,1)*v[vZ] + mat.Get(3,1);
	nz = mat.Get(0,2)*v[vX] + mat.Get(1,2)*v[vY] + mat.Get(2,2)*v[vZ] + mat.Get(3,2);
	v[vX] = nx;
	v[vY] = ny;
	v[vZ] = nz;
}
```

Basically I'm just confused as to why the Red Book has a translation matrix like this (which is how mine is meant to be set up):

```
1 0 0 dx
0 1 0 dy
0 0 1 dz
0 0 0 1
```

and other matrix classes I have read about seem to use a translation matrix like this:

```
1  0  0  0
0  1  0  0
0  0  1  0
dx dy dz 1
```

I do understand that you have two options for transforming vectors: as a row vector on one side of the multiplication, or as a column vector on the other. I'm using (I believe) the column style, so it should look like this:

```
| 1 0 0 dx |   | vx |
| 0 1 0 dy | * | vy |
| 0 0 1 dz |   | vz |
| 0 0 0 1  |   | 1  |
```

Please forgive me if I seem obtuse... I must be missing something, and since this is a key graphics-programming issue I really want to fix the error in my logic.

[Edited by - enotirab on January 20, 2009 3:33:22 PM]

##### Share on other sites
Quote:
> Original post by enotirab
> Basically I'm just confused as to why the Red Book has a translation matrix like this (which is how mine is meant to be set up):
>
> ```
> 1 0 0 dx
> 0 1 0 dy
> 0 0 1 dz
> 0 0 0 1
> ```

It is the correct notation in OpenGL.

However, keep in mind how OpenGL stores the 16 elements of the matrix in a one-dimensional array: the vertical elements (a column) are stored first, then it moves on to the next column.

The problem may be that your matrix class stores its 16 elements in row-major order: storing the first row's elements, then moving to the next row, which is common in math and C++.

I do not know OpenGL's original reason for this choice in the first place; however, as a result, you can find a very interesting fact: each run of 3 consecutive elements in the array represents a meaningful set.

For example, the first column, (m0, m1, m2), is the left (X) axis,
the second column, (m4, m5, m6), is the up (Y) axis,
the third column, (m8, m9, m10), is the forward (Z) axis,
and the rightmost column, (m12, m13, m14), is the translation.

Here is an image for better understanding: OpenGL Matrix

You can keep your own layout in your class, but you need to transpose the matrix data when you pass it to OpenGL. OpenGL also provides functions for this: glLoadTransposeMatrix{fd}() and glMultTransposeMatrix{fd}().

##### Share on other sites
Thanks very much, everyone, for your help. I finally got it to work as a result, and now my skeletal animation is working like a charm. I basically had columns as rows and rows as columns (based on the visual diagram). I don't know how long I'd have struggled with this alone. Thanks so much!
