OpenGL Help me unify camera and object matrices


Let me explain. OpenGL defines +X as to the right, +Y as up, and -Z as into the screen. I've gotten used to this coordinate system, and it's easy to think in. Except I have one issue with it: I've gotten myself stuck with two definitions of rotation matrices in my code, and I'm trying to unify them.

I don't have a problem with translation, so for now let's only consider the 3x3 rotation section of the matrix.

Suppose I want to draw an object, unrotated.  The 3x3 rotation matrix is just identity:

1 0 0
0 1 0
0 0 1

I also represent the camera's current rotation as a matrix.  I form the camera matrix out of the 'look', 'up', and 'right' vectors.  Except, the unrotated camera looks into -z, so the unrotated camera matrix is:

1 0 0         // right (+X)
0 1 0         // up    (+Y)
0 0 -1        // look  (-Z)

I have a few dumb negative signs in my code so that when I'm applying a camera matrix, the above matrix acts like the identity. But I want to unify everything: applying a matrix should just be applying a matrix. So I have a few options:
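
To make that concrete, here's roughly what the two matrices look like side by side (a minimal sketch in C; the names and layout are illustrative, not my actual code):

/* Row-per-vector layout, as in the matrices above (illustrative only). */
typedef struct { float m[3][3]; } Mat3;

/* An unrotated object: plain identity. */
static const Mat3 OBJECT_IDENTITY = {{
    { 1.0f, 0.0f,  0.0f },
    { 0.0f, 1.0f,  0.0f },
    { 0.0f, 0.0f,  1.0f }
}};

/* An unrotated camera: NOT identity, because 'look' is -Z. */
static const Mat3 CAMERA_DEFAULT = {{
    { 1.0f, 0.0f,  0.0f },   /* right (+X) */
    { 0.0f, 1.0f,  0.0f },   /* up    (+Y) */
    { 0.0f, 0.0f, -1.0f }    /* look  (-Z) */
}};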

1. Switch to left-handed coordinates: if +Z went into the screen, an unrotated camera and an unrotated object would have the same identity matrix.

2. Don't form the camera matrix from 'right', 'up', and 'look' vectors. Instead use 'right', 'up', and 'back' vectors (see the sketch after this list). Then the problem goes away, but I need to negate the back vector to get which way the camera is looking.

3. Something else?
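
For option 2, a minimal sketch of what I mean (names are illustrative):

/* Store a 'back' vector (+Z for the default camera) instead of 'look',
   so the unrotated camera matrix is plain identity. */
typedef struct { float right[3], up[3], back[3]; } CameraBasis;

/* The look direction now has to be recovered by negating 'back'. */
void camera_look_dir(const CameraBasis *cam, float out_look[3])
{
    out_look[0] = -cam->back[0];
    out_look[1] = -cam->back[1];
    out_look[2] = -cam->back[2];
}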

How does everyone else solve or avoid this problem?


First of all, your camera matrix's right, up, and look vectors should be the columns, not the rows. That's probably one issue you might be having.

The main issue is that the direction of the matrix's transformation is different for objects and cameras. For objects, the matrix should transform points from object space to world space. For cameras, the 'camera object matrix' transforms from camera space to world space, while what you want is its inverse: the view matrix, which transforms from world space to camera space.

You will find that the camera's object matrix, not the view matrix, will be the identity matrix when there is no rotation. If you respect the transformation directions and make sure you are using right-handed coordinates to compute your viewing directions, you shouldn't have any inconsistencies.

In my engine, the camera's look vector is indeed the negation of the camera's object matrix's third column.
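
To illustrate the directions (a minimal sketch, assuming column-major 4x4 arrays with the basis vectors in columns 0-2; not from any particular engine):

/* Invert a rigid (rotation + translation) camera object matrix to get
   the view matrix: transpose the rotation block, then rotate and negate
   the translation. Column-major storage throughout. */
void view_from_camera(const float cam[16], float view[16])
{
    /* Transpose the 3x3 rotation block. */
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            view[c * 4 + r] = cam[r * 4 + c];

    /* New translation = -(R^T * t). */
    for (int r = 0; r < 3; ++r)
        view[12 + r] = -(view[0 + r] * cam[12] +
                         view[4 + r] * cam[13] +
                         view[8 + r] * cam[14]);

    /* Bottom row. */
    view[3] = view[7] = view[11] = 0.0f;
    view[15] = 1.0f;
}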


Thanks for the reply.

"First of all, your camera matrix's right, up, and look vectors should be the columns, not the rows."

Isn't it the opposite? Look at gluLookAt:


It appears to do:

f = normalize(center - eye)     (making f the 'look' vector)
s = f x up                      (making s the 'right' vector)
u = s x f                       (recomputing the true 'up' vector)

and then creates the matrix:

 s0   s1   s2   0      // right
 u0   u1   u2   0      // up
-f0  -f1  -f2   0      // -look
  0    0    0   1


OpenGL orders matrices column-major, so in RAM:

float matrix[16] = {
    s0,  u0, -f0, 0,
    s1,  u1, -f1, 0,
    s2,  u2, -f2, 0,
     0,   0,   0, 1
};

Is this correct?
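
Written out in C, I think the whole construction is something like this (a rough sketch; the helper names are mine, and I've left the translation column as zero):

#include <math.h>

static void normalize3(float v[3])
{
    float len = sqrtf(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}

static void cross3(const float a[3], const float b[3], float out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}

/* View rotation (world -> camera), column-major, with rows s, u, -f. */
void look_at_rotation(const float eye[3], const float center[3],
                      const float up[3], float out[16])
{
    float f[3] = { center[0]-eye[0], center[1]-eye[1], center[2]-eye[2] };
    float s[3], u[3];
    normalize3(f);
    cross3(f, up, s);      /* s = f x up: the 'right' vector */
    normalize3(s);
    cross3(s, f, u);       /* u = s x f: the true 'up' vector */

    for (int i = 0; i < 16; ++i) out[i] = 0.0f;
    for (int c = 0; c < 3; ++c) {
        out[c*4 + 0] =  s[c];   /* row 0: right */
        out[c*4 + 1] =  u[c];   /* row 1: up    */
        out[c*4 + 2] = -f[c];   /* row 2: -look */
    }
    out[15] = 1.0f;
}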


You're getting confused by that page because the 4x4 matrix it creates is the inverse of the camera's rotation matrix. Since an orthogonal matrix's inverse is its transpose, the vectors appear in the rows, indicating the inverse transformation.


Your code above looks correct to me.
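
To tie it back to the look vector question: row 2 of that view matrix is column 2 of the camera's object matrix, so the look direction falls out by negation (a small sketch, same column-major layout as above):

/* The view rotation stores s, u, -f in its rows; its transpose (the
   camera's object matrix) has them as columns.  The look vector is the
   negated third column of the object matrix. */
void look_from_view(const float view[16], float look[3])
{
    /* Column-major: row 2 of 'view' lives at indices 2, 6, 10. */
    look[0] = -view[2];
    look[1] = -view[6];
    look[2] = -view[10];
}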
