OpenGL: Why do OpenGL transformations apply in reverse order?


Is it because, when I have a point P in space and I rotate then scale it, OpenGL reads it like this:

P' = Rotate() * P
P'' = Scale() * P'
P'' = Scale() * Rotate() * P

So starting from the left, it scales and then rotates point P. Can anyone here enlighten me? Thank you very much.

OpenGL matrices are column-major. When you multiply two column-major matrices A and B, you get A*B, in that order. When you then multiply a vertex v by A*B you get A*B*v, which means matrix B affects the vertex v first and matrix A contributes after it, in that order.

When you apply a transformation like glRotatef, the current stack matrix is multiplied on the right by the rotation matrix, so for example you obtain this:

current stack matrix = I (the identity).

glRotatef(90, 0, 1, 0);  ->  I * rotate
glScalef(1, 1, 0.5);     ->  I * rotate * scale

And that's how the matrices are applied in reverse order relative to the order in which you write the transformation calls.
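To make this concrete, here is a minimal fixed-function sketch (drawMesh() is a hypothetical helper that submits the vertices) showing that the call written last is the first one applied to each vertex:

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();                    /* M = I                 */
glRotatef(90.0f, 0.0f, 1.0f, 0.0f);  /* M = I * R             */
glScalef(1.0f, 1.0f, 0.5f);          /* M = I * R * S         */
drawMesh();                          /* each vertex v becomes R * S * v:
                                        scaled first, then rotated */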


The order of transformation is just an interpretation of the intermediate steps in a sequence of transformations; it has nothing to do with OpenGL itself.

If you look at transformations as being applied to the object's local coordinate system, they behave as if applied individually in the reverse of the order in which they appear in the code. You can also look at transformations from a global coordinate system perspective, and then the object appears to be transformed in the forward order in which the calls appear in the code.

So it is only a matter of how you interpret the code and the transformations. As far as OpenGL is concerned, there is a single matrix, and coordinates are multiplied by that matrix (considering only the object and viewpoint transforms, i.e. the modelview matrix).

There are only two things that concern OpenGL: the initial coordinate and the final transformed coordinate. If you interpret each individual step, you are concerning yourself with intermediate results, but OpenGL doesn't.
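As a sketch of the two readings (drawObject() is a hypothetical draw call), take a translation followed by a rotation:

glLoadIdentity();
glTranslatef(2.0f, 0.0f, 0.0f);      /* T */
glRotatef(45.0f, 0.0f, 0.0f, 1.0f);  /* R */
drawObject();                        /* vertices are multiplied by T * R */

/* Local-frame reading (reverse of code order): the object first rotates
   45 degrees about its own origin, and the rotated object is then moved
   +2 along x. Global-frame reading (code order): the coordinate frame is
   translated +2 along x, then rotated about its new origin, and the
   object is drawn in that frame. Both readings describe the same
   matrix T * R. */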

Mathematicians transform points the same way OpenGL does; it's DirectX that does it in reverse. Even though we usually work with affine transformations, which can be represented by 4x4 matrices, general transformations aren't matrices: they are functions from one space to another. A simple example of a non-affine transformation is this. So the composition of transformations should follow the convention for composing functions: if you have two functions f and g, their composition is the function (f∘g)(x) = f(g(x)). In your case, if S is the scaling and R the rotation, then (S∘R)(P) = S(R(P)), which is the rotation followed by the scaling. Hope it makes sense to you.
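A toy C sketch of this composition (vec2, rotate90, and scale2x are made-up helpers; rotate90 is a 90-degree counter-clockwise rotation and scale2x a uniform scale by 2):

#include <stdio.h>

typedef struct { float x, y; } vec2;

vec2 rotate90(vec2 p) { vec2 r = { -p.y, p.x };          return r; } /* R */
vec2 scale2x (vec2 p) { vec2 r = { 2.0f*p.x, 2.0f*p.y }; return r; } /* S */

int main(void) {
    vec2 P = { 1.0f, 0.0f };
    vec2 Q = scale2x(rotate90(P));   /* (S∘R)(P) = S(R(P)): rotate first */
    printf("%f %f\n", Q.x, Q.y);     /* prints 0.000000 2.000000 */
    return 0;
}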

That way you can build hierarchical renderings:

For example you have a car with four wheels.

In real life (I mean the non-reversed order):

transform_wheel (its rotation and steering)
transform_wheel_position (put the 4 wheels at the 4 corners)
transformation_of_car_in_world
camera_transformation


So if you want to draw the wheels, you have to traverse through all of these transformations to get the final one.
If you want to draw the body of the car, you traverse through only the last three transforms, and so on.

In OpenGL:

camera_transformation
draw_world
transformation_of_car_in_world
draw_car_body
push
    transform_wheel_1_position (put wheel 1 at its corner)
    transform_wheel_1 (its rotation and steering)
    draw_first_wheel
pop
push
    transform_wheel_2_position
    transform_wheel_2 (its rotation and steering)
    draw_second_wheel
pop

...

So if you build the hierarchy well, you don't have to walk through all the transformations for every single object; you only multiply by the local transformation of the current object and use the matrix stack (push/pop), as in the sketch below.
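A minimal fixed-function sketch of this hierarchy. All names here are hypothetical: applyCameraTransform(), drawWorld(), drawCarBody(), drawWheel(), the car position carX/carY/carZ, the wheel offsets wheelX/wheelZ, and the angles steerAngle/spinAngle.

glLoadIdentity();
applyCameraTransform();                         /* camera_transformation */
drawWorld();
glPushMatrix();
    glTranslatef(carX, carY, carZ);             /* transformation_of_car_in_world */
    drawCarBody();
    for (int i = 0; i < 4; ++i) {               /* the four wheels */
        glPushMatrix();
            glTranslatef(wheelX[i], 0.0f, wheelZ[i]);   /* wheel position  */
            glRotatef(steerAngle[i], 0.0f, 1.0f, 0.0f); /* steering        */
            glRotatef(spinAngle,     1.0f, 0.0f, 0.0f); /* rolling         */
            drawWheel();
        glPopMatrix();                          /* back to the car's frame */
    }
glPopMatrix();                                  /* back to the world frame */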
