# OpenGL Quaternions and a rotating ball


## Recommended Posts

Hey, I'm having some problems getting a ball to rotate. I have a ball moving along the XZ-plane which I want to rotate nicely. This leads me to what seems to be one of the most common problems with rotation (glRotatef, since I use OpenGL): it's kinda hard to rotate something around two world axes (non object-local). I've read about quaternions on a few sites and finally decided to do as in the NeHe tutorial (http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=Quaternion_Camera_Class), which seems ideal for this. What I do is calculate how far the ball has moved in the Z and X directions and translate that into how many degrees the ball has rotated. I keep track of these angles of rotation about the absolute X and Z axes. Since two rotations won't do it, I create two quaternions, one for each axis:
GLfloat Matrix[16];
glQuaternion result, xaxis, zaxis;
xaxis.CreateFromAxisAngle(1.0, 0.0, 0.0, player->getRotationX());
zaxis.CreateFromAxisAngle(0.0, 0.0, 1.0, player->getRotationZ());
result = xaxis*zaxis;
result.CreateMatrix(Matrix);
glMultMatrixf(Matrix);
But I still get odd rotations. For example, when the rotation around the X-axis is 90 degrees and I move along the X-axis, the ball rotates around the Y-axis, which is exactly what happens when you use glRotatef twice (since the Z-axis points downwards after the rotation around the X-axis). I've been trying to get these silly quaternions to work for the last 4 hrs and I'm starting to get kinda irritated at them :) Does anyone have any ideas?

##### Share on other sites
Quote:
 Original post by noof
 GLfloat Matrix[16];
 glQuaternion result, xaxis, zaxis;
 xaxis.CreateFromAxisAngle(1.0, 0.0, 0.0, player->getRotationX());
 zaxis.CreateFromAxisAngle(0.0, 0.0, 1.0, player->getRotationZ());
 result = xaxis*zaxis;
 result.CreateMatrix(Matrix);
 glMultMatrixf(Matrix);
 But I still get odd rotations. For example, when the rotation around the X-axis is 90 degrees and I move along the X-axis the ball rotates around the Y-axis, which is exactly what happens when you use glRotatef twice (since the Z-axis points downwards after the rotation around the X-axis). I've been trying to get these silly quaternions to work the last 4 hrs and I'm starting to get kinda irritated at them :)
Using quaternions is gaining you absolutely nothing here. In short, if one can't make something work with matrices, they probably won't be able to get it to work with quaternions; the latter has some advantages over the former, but the fundamental properties of the two representations are the same.

So I'd drop the quaternions for now. As for getting the ball to roll in a somewhat realistic way, here's what occurs to me (short of an actual physics-based simulation). The axis of the ball's rotation is perpendicular to a) the ball's direction of motion, and b) the normal of the surface on which it is moving. In this case it is always uniquely defined, so you can just take the normalized cross product of the velocity and up vectors. Then, you can find the amount of rotation from the ball's circumference and the distance it's traveled. You can accumulate this angle over time and then feed the axis-angle pair to glRotate().
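That recipe can be sketched in a few lines. This is illustrative code, not from any post in this thread: the `Vec3` type and helper names are assumptions, and depending on your rotation convention you may need `cross(up, velocity)` instead of `cross(velocity, up)` to get the spin direction right.

```cpp
#include <cassert>
#include <cmath>

// Minimal 3D vector, just enough for the cross-product idea above.
struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

float length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

Vec3 normalize(const Vec3& v) {
    float len = length(v);
    return { v.x / len, v.y / len, v.z / len };
}

// Axis of rotation: perpendicular to both the velocity and the surface normal.
Vec3 spinAxis(const Vec3& velocity, const Vec3& up) {
    return normalize(cross(velocity, up));
}

// Rolling without slipping: arc length = radius * angle,
// so angle in degrees = distance / radius * 180 / pi.
float spinAngleDegrees(float distanceTravelled, float radius) {
    return distanceTravelled / radius * 180.0f / 3.14159265f;
}
```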

There might be other issues involved if your ball changes direction. This is a place where your quat class may come in handy (not because it has any special properties with respect to matrices, but rather just because you already have it available).

##### Share on other sites
Making a quaternion from Euler angles, like you're doing there, is sort of like making a fancy decorated Bavarian custard from spoiled milk. It's still going to taste bad. Use quaternions (or matrices) to keep track of the orientation of the ball. Don't use the "xaxis, zaxis" thing. Don't use Euler angles anywhere.

##### Share on other sites
Quote:
 Original post by noof
 GLfloat Matrix[16];
 glQuaternion result, xaxis, zaxis;
 xaxis.CreateFromAxisAngle(1.0, 0.0, 0.0, player->getRotationX());
 zaxis.CreateFromAxisAngle(0.0, 0.0, 1.0, player->getRotationZ());
 result = xaxis*zaxis;
 result.CreateMatrix(Matrix);
 glMultMatrixf(Matrix);

Just to chime in, my first reaction to this code was "wow, this is incredibly... pointless". You probably fell for the silly "quaternions are the magical solution to all your rotation problems" idea and decided to just throw some in without looking up what they are, how they work and why they work.

So you create two quaternions, which do exactly the same thing the matrices would have, and end up with a completely wasteful way to do:
glRotate(a,1,0,0);
glRotate(b,0,0,1);
just that now there is a lot of useless back and forth between all possible representations.

As pointed out above, your problem isn't using quaternions or converting Euler angles to quaternions, but the fact that you are using Euler angles at all. They are pretty much useless for everything except a typical shooter-style first person camera. NEVER try to store an orientation as Euler angles; there is just no useful way to add additional rotations, unless you convert them (ALL angles) to a (SINGLE) matrix/quaternion, apply the new rotation to that and convert back. And at that point you should realize that storing a matrix or quaternion, instead of constantly converting from/to a useless representation, saves you a lot of work and trouble.

##### Share on other sites
We all have to start somewhere :)

Quote:
 Original post by jyk
 Then, you can find the amount of rotation from the ball's circumference and the distance it's traveled. You can accumulate this angle over time and then feed the axis-angle pair to glRotate().

I actually tried that one before I found out about quaternions, but it's kinda hard to accumulate the rotation that way (unless you use matrices, which I didn't think of)

Quote:
 Original post by Trienco
 You probably fell for the silly "quaternions are the magical solution to all your rotation problems" and decided to just throw some in without looking up what they are, how they work and why they work.

100% correct, although I started realizing that quaternions aren't as magic as I thought once I actually thought about them and couldn't grasp why they would solve my problem (I just figured that was part of the other silly idea: "quaternions are so hard to understand that you don't wanna waste your time doing it")

So, after reading your thoughts (thanks a lot for them btw!) I'm thinking of the following solution:
1) cross velocity vector with up-vector (plane normal) to get the vector the ball is spinning around
2) get the number of degrees the ball has rotated around that vector
3) convert that spin to a quaternion like:
glQuaternion rot;
rot.CreateFromAxisAngle(degrees, spinvector.x, spinvector.y, spinvector.z);

4) "add" that rotation to the rotation accumulator:
total = total*rot;

5) rotate the ball:
GLfloat Matrix[16];
total.CreateMatrix(Matrix);
glMultMatrixf(Matrix);

Does this sound any better? :)
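For what it's worth, steps 3-5 can be sketched roughly like this. This is a minimal stand-in, not the NeHe glQuaternion class; it assumes the usual half-angle axis-angle construction, the Hamilton product, and the column-major layout glMultMatrixf expects.

```cpp
#include <cassert>
#include <cmath>

// Minimal unit quaternion sketching steps 3-5 above.
struct Quat {
    float w, x, y, z;

    // Usual convention: half the angle, normalized axis.
    static Quat fromAxisAngle(float degrees, float ax, float ay, float az) {
        float half = degrees * 3.14159265f / 180.0f * 0.5f;
        float s = std::sin(half);
        return { std::cos(half), ax * s, ay * s, az * s };
    }

    // Hamilton product: the combined rotation.
    Quat operator*(const Quat& q) const {
        return { w*q.w - x*q.x - y*q.y - z*q.z,
                 w*q.x + x*q.w + y*q.z - z*q.y,
                 w*q.y - x*q.z + y*q.w + z*q.x,
                 w*q.z + x*q.y - y*q.x + z*q.w };
    }

    // Fill a column-major 4x4 matrix, ready for glMultMatrixf.
    void toMatrix(float m[16]) const {
        m[0] = 1 - 2*(y*y + z*z); m[4] = 2*(x*y - w*z);     m[8]  = 2*(x*z + w*y);     m[12] = 0;
        m[1] = 2*(x*y + w*z);     m[5] = 1 - 2*(x*x + z*z); m[9]  = 2*(y*z - w*x);     m[13] = 0;
        m[2] = 2*(x*z - w*y);     m[6] = 2*(y*z + w*x);     m[10] = 1 - 2*(x*x + y*y); m[14] = 0;
        m[3] = 0;                 m[7] = 0;                 m[11] = 0;                 m[15] = 1;
    }
};
```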

[Edited by - noof on January 20, 2006 4:54:11 AM]

##### Share on other sites
Quote:
 Original post by noof
 (just thought that was part of the other silly "quaternions are so hard to understand that you don't wanna waste your time doing it")

That depends on how deeply you want to understand them. But usually we only care about unit quaternions, so reducing it to an axis and an angle can already get you somewhere, without worrying too much about all the implications of having complex numbers with not just i, but also j and k, and all their properties.

Quote:
 I'm thinking of the following solution:

If I get you right you plan to store the total rotation/orientation as a quaternion and just add (well, multiply) the rotation for each update. That would be pretty much the way to do it.

Keep in mind that quaternions are smaller, don't have the deorthonormalization problem of matrices, and quat-quat multiplication is a bit cheaper. But transforming something by a quaternion is a good bit more expensive. So they are usually only of interest if you really need to save memory or expect to concatenate lots of transformations, so that the cheaper multiplication is worth the more expensive transformation.

##### Share on other sites
Quote:
 Original post by Trienco
 If I get you right you plan to store the total rotation/orientation as a quaternion and just add (well, multiply) the rotation for each update.
Yeah, that's the solution I'm thinking about trying later today.

Quote:
 Original post by Trienco
 Keep in mind that quaternions are smaller, don't have the deorthonormalization problem of matrices, and quat-quat multiplication is a bit cheaper. But transforming something by a quaternion is a good bit more expensive. So they are usually only of interest if you really need to save memory or expect to concatenate lots of transformations, so that the cheaper multiplication is worth the more expensive transformation.

Ok, I don't think it'll make a huge difference in my project, but it's always good to know :)

Quote:
 Original post by Trienco
 But usually we only care about unit quaternions

That makes me think of another question. Do I have to normalize the quaternion after doing the "multiplication" ? like:
total = total*rot;
total = total / sqrt(w^2+x^2+y^2+z^2)

because I think only x^2+y^2+z^2=1 and not w^2+x^2+y^2+z^2=1, since I will get a unit vector as the "vector part" of the quaternion, but the angle will be arbitrary, or?

##### Share on other sites
Quote:
 Original post by noof
 That makes me think of another question. Do I have to normalize the quaternion after doing the "multiplication" ?

Mathematically? No, the result should also be a unit quaternion.
Technically? Every once in a while, because discrete representations of real numbers (i.e. float/double) introduce small errors that can add up. As sqrt can be kind of expensive, it might be worth checking whether a conditional is better (or just normalizing after every 100 or whatever multiplications). And yes, you normalize the whole thing, not just part of it (which means you can drop one, preferably positive, component if you really need every byte you can get).
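A sketch of that suggestion (the names and the drift threshold here are my own, not from the post): compare the cheap squared norm against 1 and only pay for the sqrt when the drift is noticeable.

```cpp
#include <cassert>
#include <cmath>

// Quaternion stored as four floats; normalize divides by the full 4-component norm.
struct Quat { float w, x, y, z; };

float norm(const Quat& q) {
    return std::sqrt(q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z);
}

void normalize(Quat& q) {
    float n = norm(q);
    q.w /= n; q.x /= n; q.y /= n; q.z /= n;
}

// Instead of a sqrt every frame, renormalize only when drift exceeds a tolerance.
// The squared norm is cheap to compute; the threshold 1e-3 is an arbitrary choice.
void renormalizeIfNeeded(Quat& q) {
    float n2 = q.w*q.w + q.x*q.x + q.y*q.y + q.z*q.z;
    if (std::fabs(n2 - 1.0f) > 1e-3f)
        normalize(q);
}
```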

##### Share on other sites
Tried the new idea and I still have the same problem. In every frame I do (rotation is the rotation accumulator as a quaternion, velocity is the normalized ball velocity vector):
Vector3D spinAxis = velocity.crossProduct(Vector3D(0,1,0)); // velocity is normalized
float rotationAngle = <calculates angle here>;
if(!rotation) {
  // first frame
  rotation = new Quaternion(rotationAngle, spinAxis.getX(), spinAxis.getY(), spinAxis.getZ());
} else {
  // other frames
  *rotation = (*rotation)*Quaternion(rotationAngle, spinAxis.getX(), spinAxis.getY(), spinAxis.getZ());
}
rotation->normalize(); // just to be sure
glMultMatrixf(...);
Which gives me exactly the same results :( But when I think about it, I would be kinda surprised if it had worked. Let's say for example that I have a really sucky frame rate and start by rolling the ball 90 degrees along the Z-axis before frame 1. Then I turn around 90 degrees and roll the same distance along the X-axis before frame 2. In the first frame I will have:
rotation = Quaternion(90, 1, 0, 0);
and in the next frame:
rotation = rotation*Quaternion(90, 0, 0, 1);
which seems to be exactly what I had in my first attempt to use quaternions. I'm obviously missing something here, any more ideas? :)

*edit* It's working now, YEY! :D And it looks damn sweet :) I replaced:
  *rotation = (*rotation)*Quaternion(rotationAngle, spinAxis.getX(), spinAxis.getY(), spinAxis.getZ());
with:
  *rotation = Quaternion(rotationAngle, spinAxis.getX(), spinAxis.getY(), spinAxis.getZ())*(*rotation);
If someone has an explanation for why it should be that way and not the other, feel free to post a reply. (And thanks for all the help, really appreciated!)

[Edited by - noof on January 20, 2006 10:51:21 AM]

##### Share on other sites
Because, as you noticed, neither matrix nor quaternion multiplication is commutative. Order matters, and in this case it makes the difference between rotating around the local axes (after all previous rotations have been applied) and the global axes (before previous rotations are applied). Since you calculate your rotation axis in world coordinates (as far as I can tell), you need to multiply the other way round. Which one is "right" pretty much depends on whether your multiplication function swaps the quaternions or not. I would guess some people do, simply because it allows you to write quat *= other_quat instead of having to write quat = other_quat*quat all the time.
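A small self-contained demo of that order difference, assuming the Hamilton product and rotating a vector as q v q*: rolling 90 degrees about world X and then 90 degrees about world Z lands in different places depending on which side the new rotation is multiplied on.

```cpp
#include <cassert>
#include <cmath>

// Pre-multiplying a new rotation applies it about the fixed world axes;
// post-multiplying applies it about the already-rotated local axes.
struct Quat {
    float w, x, y, z;
    static Quat fromAxisAngle(float degrees, float ax, float ay, float az) {
        float half = degrees * 3.14159265f / 360.0f;
        float s = std::sin(half);
        return { std::cos(half), ax * s, ay * s, az * s };
    }
    Quat operator*(const Quat& q) const {  // Hamilton product
        return { w*q.w - x*q.x - y*q.y - z*q.z,
                 w*q.x + x*q.w + y*q.z - z*q.y,
                 w*q.y - x*q.z + y*q.w + z*q.x,
                 w*q.z + x*q.y - y*q.x + z*q.w };
    }
};

struct Vec3 { float x, y, z; };

// Rotate v by unit quaternion q: q * (0, v) * conjugate(q).
Vec3 rotate(const Quat& q, const Vec3& v) {
    Quat p{0.0f, v.x, v.y, v.z};
    Quat c{q.w, -q.x, -q.y, -q.z};
    Quat r = q * p * c;
    return { r.x, r.y, r.z };
}
```

Under these conventions, putting the new rotation on the left applies it about the fixed world axes, which matches the poster's working fix of Quaternion(...)*(*rotation) for a spin axis computed in world coordinates.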
