OpenGL: The maths behind the rotations

This topic is 1905 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hello forum,
can someone explain to me the maths behind rotations in OpenGL?

I attached a picture of a rotation around the X-axis, so that you know what kind of maths I mean.

Are you familiar with matrix multiplication? If not, I suggest you have a look into it. Something that also helped me understand rotations was doing simple vector rotations (cameras are a good example).

Edit:

Possibly learning how to use cos and sin to move around the bounds of the unit circle would be a good start. http://en.wikipedia.org/wiki/Unit_circle

If you are standing in the middle of the unit circle looking directly right (x = 1, y = 0) and, say, you want to rotate 45 degrees, you would get the new coordinates like this (note that in C, cos and sin take radians, so the degrees have to be converted first):

[CODE]
x = cos(45.0 * 3.14159265 / 180.0);
y = sin(45.0 * 3.14159265 / 180.0);
[/CODE]

This will give you the new direction at 45 degrees from the original direction. Think about it: cos(90°) gives a value of 0. This is correct, because if we rotated 90 degrees, the x component of the direction vector would be 0, as it would be pointing directly up (x = 0, y = 1).

Hope this helps a bit; rotations are tricky when you first start learning them. Edited by rocklobster

You have attached a transformation in which p' is the transformed vector of p, produced by the rotation matrix you showed.
As stated above, you'll need to understand the maths behind this.
You can't just take shots in the dark and learn only the basics, because this knowledge is crucial to game development.

The picture shows a homogeneous transformation matrix: a basic rotation around the x-axis, written for a 2D system R[sub]2[/sub], hence the 3rd column (tx, ty, w), where w is the homogeneous coordinate.
You can learn all about this from many tutorials on the internet with a quick search; it isn't something that helpful people on gamedev can explain to you in 1-10 posts. Edited by Kaptein

If you start with an identity matrix:[CODE]
1 0 0
0 1 0
0 0 1

[/CODE]That will simply scale x with 1, y with 1 and z with 1. If you look at the next matrix, with only one non-zero entry:
[CODE]0 1 0
0 0 0
0 0 0[/CODE]
This will scale y with 1 and add it to x. But x was scaled with 0 this time, so the only thing that remains from this transformation is the value of y moved to x, while y and z are cleared. Now look at this:
[CODE]0 0 0
1 0 0
0 0 0[/CODE]
It is almost the same thing, but it will scale x with 1 and copy it to y, while scaling the old y with 0. So this time, x is moved to y. These can be combined. The following:
[CODE]0 1 0
1 0 0
0 0 0[/CODE]
Will thus swap x and y. Now we are getting closer to a rotation. Rotating 90 degrees around the z axis (in a right-handed coordinate system) is the same as moving x to y while moving -y to x, leaving z unchanged. That would be:
[CODE]0 -1 0
1 0 0
0 0 1[/CODE]
If you set alpha to 90 degrees in your example, you will see that this is what you get.

Ok, this really is a topic which can't be explained in 1-10 posts. I spent the whole day learning about vectors, scalars, matrices and so on. Now I have some good knowledge, and in a few hours I will understand this completely. Nevertheless, I understood those replies and they were helpful.
Thank you. Edited by bigdilliams

There are some additional key things about matrices you should also learn, as you are probably going to use them:[list]
[*]Matrix multiplication is associative, but not commutative.
[*]A transformation matrix can combine rotation, scaling and translation at the same time, in a single matrix.
[*]Matrix multiplication has the effect as if the rightmost transform is applied first. To understand what the matrices A, B and C are going to do to a vector v in A*B*C*v, you can see it as the transformation C being applied to v first, then B, and finally A.
[/list]
To compute A*B*C*v, it is possible to do M=(A*B*C) first, and then M*v. This still preserves the interpretation in point 3.

If you have an object you want to rotate and then translate out into world coordinates, you have to multiply each vertex v as in T*R*v. If you had done R*T*v instead, you would first translate the object out to the "right position", but then rotate the whole translated vector around the origin.
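A minimal sketch of the order-of-multiplication point, using 3x3 homogeneous 2D matrices (the Mat3 type and the mat_mul/mat_apply helpers are hypothetical names for illustration):

```c
/* Hypothetical helpers for 3x3 homogeneous 2D matrices, showing that
   T*R and R*T give different results (multiplication is not commutative). */
typedef double Mat3[3][3];

/* out = a * b (out must not alias a or b) */
void mat_mul(Mat3 a, Mat3 b, Mat3 out)
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            out[i][j] = 0.0;
            for (int k = 0; k < 3; ++k)
                out[i][j] += a[i][k] * b[k][j];
        }
}

/* out = m * v for a column vector v = (x, y, w) */
void mat_apply(Mat3 m, double v[3], double out[3])
{
    for (int i = 0; i < 3; ++i)
        out[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
}
```

With T translating by (5, 0) and R rotating 90 degrees, T*R sends the point (1, 0, 1) to (5, 1, 1): rotate in place, then move. R*T sends the same point to (0, 6, 1): move first, then rotate the whole translated vector around the origin.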

This answer is not specifically for the person who asked the question. It might even be for myself.



Man, the hardware just makes an illusion, after all. Don't go nuts over how hard it is to grasp the source of a specific equation or function, as if you had to imagine it from its results instead of from how it works in a specific case, bound to a purpose. Math alone is sterile, but it is amazing when its functions join together for a purpose: to work as an [color=#ff0000][b]analogy of nature, in this case a graphical analogy built from heterogeneous equations[/b][/color].

What makes these (trigonometric) functions valuable is that they return something within a fixed range in a fixed way (what goes in as x comes out as y, always), bouncing from up to down (or left/right, back/forth, -1..1, sun and moon, black to white, whatever is defined and represented as a, usually smooth, transition).

What you see on the screen is a composition, the result of parameters changing over a specific span of time to create a scene (I am not going out of scope, since the "bones" of the issue ARE the mathematical functions). A simulation, like the hardware itself, is no magic but a bunch of tricks, and those trigonometric functions are nothing but this:

[media]http://www.youtube.com/watch?feature=player_detailpage&v=s1eNjUgaB-g[/media]

ACTUALLY a bounce. ACTUALLY. Other kinds of functions represent (because they act as) other natural phenomena (such as steady growth in two dimensions by a square, or in three dimensions by a cube), running parallel (as a simulation) to what happens in nature, chained to other functions based on the natural results of natural interactions of the aspect (variable) you are looking for. That is directed, aimed functionality, with functions as its "artifacts", not its purpose (despite also being a purpose, as a language for computers).

You may not grasp the math as a plot of specific variables flowing through functions to define spatial conditions, because it renders too fast. If you could watch the rendering slowed down, you could see how the interactions lead the results into known ways of using trigonometry to simulate movement (riding a bicycle is somewhat similar: you know the steps, but speed makes it work). Don't treat a function as the "identity of a transformation" just because of its declaration; take it slowly and see that every piece of a function is an independent possibility, bound specifically to act as part of the simulation of a phenomenon, either literally (direct rendering) or as flow control (function direction or parameter). What you see in an algebraic expression is, in the end, an articulation of, or an identity with, a natural event, in its smallest identifiable aspect.

There is no physical limit for simulation. [b]Don't treat functions as a fetish without understanding their limits, how they are connected, and their practical purpose.[/b]


The results you see are just attenuations and vectorizations (perspective adjustments) of the same functions under different variations, all computed so that it APPEARS a WORLD is being drawn, when it is really just "[b]chained reactions of known results, based on motions observed in reality, that run in just ONE DIMENSION for EACH VARIABLE[/b], positioned in a two-dimensional space. [color=#0000ff][b]2 (even for 3D graphics)[/b][/color]." Don't cling to the world; the simulation is [b]PURE ARTIFACT of reality, made by the mind[/b]. Don't believe there is some magic "deep simulation" in the functions, because there is no depth, just a parameter changing how a variable changes, based on a symbology that turns one number into a fraction of another. THAT IS THE MAGIC: a number changing others based on its own value, and by that change in specific and isolated variables, shifting up and down, reducing the lengths of lines, affecting sizes, TURNING/TREATING X into/as Z, Z into/as Y, Y into/as X, and causing the effect of rotation or flipping through simple substitution (each matrix used can use [color=#ff0000][b]anything as anything[/b][/color], by any parameters, from any source; "anything" here means NUMBERS, which can be [color=#ff0000]vectors, scalars, or those "normals"[/color] that put z back into z, x into x and y into y again [color=#ff0000][b]to close the circus[/b][/color]).
I swear again: THERE IS NO REAL TRANSLATION, DEPTH, SCALE, ROTATION or whatever. What you see is a number changing another number so that a scene appears, all based on an imitation of what would happen in reality (the "world" is just a relation between functions: [color=#006400][i]it doesn't matter how beautiful or weird an equation may appear, because it doesn't work by itself, and if the real condition it was conceived from changes, the equation may change too. Conceiving a function is the real art and synthesis; its use for graphics is, amazingly, [b]just an artifact of math, in analogy to nature[/b][/i][/color]).

If algebra is like the sun for you, [b][color=#ff8c00]don't look at the sun[/color][/b]. See how it was made, and you might understand the many aspects of reality that math simulates, and how they join to produce a variable.

Your vision should not be that of the marionette (I hope) but of the handler, who has nothing more than a few strings and sticks yet, by his skill, can make a beautiful world to entertain people. Math has only numbers and their transformations: single or chained functions, based on nature, or not:


[media]http://www.youtube.com/watch?feature=player_detailpage&v=SPBm8I7hoBQ[/media]


Sorry for my way of explaining; it might not be good, but it may help. Edited by Caburé

